Everything posted by Aristos Queue
-
Do you make any DLL calls in your code? The execution trace tool cannot generally track allocations made in DLLs.
-
For the record, when you get a chance to look at LV 2012, there's a big shipping example I wrote included with it that demonstrates every trick I know for putting together "a full LV application", with the exception of hooking a custom Tools >> Options dialog and a custom runtime menu (gotta give you some reason to upgrade to 2013, right?). Included in that is a splash screen that loads and starts running instantly regardless of how large the VI hierarchy gets. The splash screen plays an animation (it's pretty simple -- just a sequence of Boolean LEDs that cycle repeatedly, meant as a placeholder for whatever splashier graphics you might design for your app) that repeats over and over while the rest of the app loads in the background. That background loading is done simply by setting one of the subVIs to "Load On First Call". For most LV developers, there aren't any surprises here, but I figured it was high time there was a single reference implementation that put all the interesting tricks in one place for a generic "application X". Once everyone gets a look at it, I expect a substantial amount of feedback (positive and negative, I hope) that can feed back into it for future development. I'll post more about it when LV 2012 is public.
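For readers who want the shape of that splash-screen pattern outside LabVIEW, here is a rough Python sketch: a foreground animation loop that spins while the heavy initialization runs on a background thread. The names (load_main_app, play_splash_frame) are made up for illustration and are not from the shipping example.

    import threading, time

    def load_main_app():
        # Stand-in for the expensive load -- in the LabVIEW version,
        # this is the subVI hierarchy marked "Load On First Call".
        time.sleep(2)

    def play_splash_frame(i):
        # Stand-in for advancing the splash animation one step.
        print("splash frame", i % 8)

    loader = threading.Thread(target=load_main_app)
    loader.start()
    i = 0
    while loader.is_alive():        # animate until the load completes
        play_splash_frame(i)
        i += 1
        time.sleep(0.1)
    loader.join()
    print("main app ready")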
-
I am NOT particularly knowledgeable about AppBuilder. For the purposes of this post, treat me as a user, not as part of R&D. :-) After you make the scripting changes, do you call the Save Instrument method on the VI? If you do, then I've got no idea. If you don't, then I believe I know the answer: LabVIEW loads a fresh copy of your VIs into a new application instance to do the build. I do not know if this happens before or after the PreBuild VI runs. If it happens after, then I would expect your in-memory changes to be loaded into the new app instance, but it could be that we load from disk... I'm honestly not sure. If it happens before, I do know that we will not update from your developer app instance after that copy is made unless the changes are saved to disk (actually, saving to disk is the usual rule for forcing an update of the other app instances, but the AppBuilder app instance is special and it might not update even on save, though I *think* that it does).
-
You seem to be implying that I said otherwise. One of us misunderstood something. For that, all you need is a pulsing bar that never moves, and you can build that on your own. What we're talking about is a progress bar that moves forward, and that *does* need to reflect a percent to some degree. Here is what I have heard from various user experience (UX) researchers:
1) If you have a progress bar, it must move along with some reflection of the process underneath. It doesn't have to move in time, but it should strive to, and it should avoid the 99%-and-hang problem, as that invalidates the whole reason for having a progress bar.
2) Progress bars only move forward. They never back up unless it is clear to the user that something is being undone (e.g., an error occurred during installation and we're backing out the previous part of the install).
3) If you cannot provide a percent progress, you should not have a progress bar. Something like the pulsing bar is better, to avoid creating more frustration among users.
4) If the activity takes more than a couple of seconds, provide a Cancel button and make that Cancel snappy (i.e., don't take forever to cancel the operation).
LabVIEW does not follow these guidelines in many areas, but we're trying to fix some of that. If you're writing a new application, those are the guidelines I'd suggest you follow.
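As a rough illustration of rules 1, 2, and 4 (a hedged Python sketch; the Progress class and long_task are made-up names, nothing LabVIEW-specific): a progress reporter can clamp updates so the bar never moves backward and can expose a cancel flag that the long-running work checks frequently.

    import threading

    class Progress:
        """Forward-only progress with a snappy cancel flag."""
        def __init__(self):
            self.fraction = 0.0
            self.cancelled = threading.Event()

        def update(self, fraction):
            # Rule 2: never move backward, never exceed 100%.
            self.fraction = max(self.fraction, min(fraction, 1.0))

        def cancel(self):
            self.cancelled.set()

    def long_task(progress, steps=100):
        for i in range(steps):
            if progress.cancelled.is_set():   # Rule 4: check often
                return
            # ... one unit of real work goes here ...
            progress.update((i + 1) / steps)  # Rule 1: reflect real work

    p = Progress()
    long_task(p)
    print(f"{p.fraction:.0%}")   # 100%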
-
Sort an array according to the elements of another array
Aristos Queue replied to Ano Ano's topic in LabVIEW General
Ano Ano: These sorts of questions always work best if you can post your best attempt so far and let us help you fix it. If nothing else, go ahead and build the front panel that you want so that everyone is clear what data types you're trying to build up. This sounds like a homework question, and the community will help, but it won't write it for you. Give us something to start with and then we can point you the right way.
-
I think Rammer's question is more detailed ... he asks LabVIEW to load a top-level VI. Can he get any sort of progress information about where LV is in the load process and display a progress bar?

Rammer, the answer is no, not in general. We had a new hire join the LV team during the 2012 release cycle, and one of the initial "small" projects he was given to get started with our code base was to try to design such a progress bar system, both for use within LV's internal dialog and possibly exposing hooks for you to create such a progress bar in your code. He ended up pulling in a rather large portion of the LabVIEW team trying to find a decent solution. There's a fundamental logical barrier to doing this: when a VI loads, there's no way for the top-level VI to have any idea how many subVIs it will end up loading as its full hierarchy loads in. The group who worked on this tried many, many approaches to get around this lack of knowledge and still produce a progress bar that only moves forward and doesn't end up with the 99%-and-holding problem. Nothing was ever particularly satisfactory.

We concluded the only valid solution is on an application-by-application basis. If you just open a reference to your top-level VI, that will load all the VIs in memory. But you could open a reference to one of its deep subVIs, thus only loading that subtree. Then open a reference to another layer up, then another layer up, and you would update your own progress bar after each of those Open VI Reference calls, with the knowledge of what percent of your VI hierarchy that particular open represented. (A rough sketch of that staged-loading idea is below.)

That new hire moved on to other projects within 2012, but he continues to check out other apps and strategies for handling this problem generally, so maybe something will pop up in the future; at the moment, no good ideas are on the table. Note that any strategy that gives us a load progress bar but ultimately makes load take a longer time, like preflighting all the subVIs, is off the table... the last thing LabVIEW needs is to *add* load time in the dev environment when we've made good strides these last couple releases with *subtracting* it!
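Here's a minimal Python sketch of that staged-loading idea, since the same logic applies in any language: you pre-assign each stage a fraction of the total (based on what you know about your own hierarchy) and advance the bar only after each stage completes. The stage names, loader functions, and fractions here are all hypothetical.

    # Each stage is (description, loader function, share of total work).
    # In LabVIEW, each loader would be an Open VI Reference call on a
    # progressively higher layer of your own hierarchy.
    def load_daq_subtree(): ...
    def load_analysis_layer(): ...
    def load_top_level(): ...

    stages = [
        ("DAQ subtree",    load_daq_subtree,    0.50),
        ("analysis layer", load_analysis_layer, 0.30),
        ("top-level VI",   load_top_level,      0.20),
    ]

    done = 0.0
    for name, load, share in stages:
        load()                # loads only what isn't already in memory
        done += share
        print(f"Loaded {name}... {done:.0%}")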
-
Yeah, but when someone writes a big Minecraft map in version Alpha.1 and that map doesn't load in version 1.0, it doesn't potentially sink a $2 million project. With LabVIEW, that's exactly what happens. And then they want us to make it work. At the time I posted the Randomize VI, passwords were not as severely broken as they are today. You can be sure I won't be repeating the mistake of publicly posting a prototype in the future.
-
The password signals that if I'm looking for something I can adjust, there's no reason to look here. Now that scripting is released, that signal is the primary reason for passwords to exist. In that sense, it's a time saver. In the case of the Call Library wrappers, there's nothing there to read... it isn't as if you would learn any aspect of G programming from seeing that call, and the vast majority of them have all of their parameters wired straight to the conpane. In the case of the unreleased features, we may have configured it into the one setup that actually works, and almost any adjustment will destabilize it. Or it has some feature that doesn't really work for arbitrary use cases, and the only one that does work is the one we have exposed as a VI. We get people calling us up all the time who have broken into these VIs and want us to fix their system which is no longer working. It's hard to have sympathy for them. We've discussed that if the password protection becomes insufficient generally, we might change to shipping these as built DLLs, so the VIs won't even exist on disk. That may be the better thing to do, so there isn't "just a password" standing between users and the diagrams.
-
Objects deleted without calling "destroy" VI
Aristos Queue replied to Maite's topic in Object-Oriented Programming
Ulf: There's no garbage collector in LabVIEW. GC is a technical term with specific meanings for programming languages. Say instead that LabVIEW has contracted times when the references will be automatically released.

Fernando: A reference -- any reference type -- in LabVIEW is automatically destroyed when the top-level VI that created it goes idle. I'm not sure what you're using for your "singleton class" because that's a pretty ill-defined term in LabVIEW. I'm going to assume you mean you have a Data Value Reference (DVR) that contains an object, and you only create one of those DVRs and just return the same DVR every time someone requests it. That DVR is only going to remain valid as long as the first top-level VI is running. You will need a different mechanism to share references between separate top-level VIs. If you are using DVRs, let me suggest you use a single-element queue instead... give the queue a name at Obtain Queue, and that way you'll get a different refnum every time, but each refnum will refer to the same underlying queue. (A rough sketch of those semantics appears at the end of this post.) There are lots of comments on LAVA and on ni.com about single-element queues if you need further guidance.

He's a new guy... I take it easy on the new guys. :-)
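For readers who want the flavor of named single-element queues outside LabVIEW, here is a hedged Python sketch (the registry and Handle class are inventions for illustration, not LabVIEW's actual mechanism): each "obtain" by the same name returns a new handle, but all handles share one underlying single-slot queue.

    import queue

    _registry = {}   # name -> shared underlying queue

    class Handle:
        """A distinct refnum-like wrapper over a shared queue."""
        def __init__(self, q):
            self._q = q
        def enqueue(self, item):
            self._q.put(item)       # blocks if the single slot is full
        def dequeue(self):
            return self._q.get()

    def obtain_queue(name):
        # Same name -> same underlying queue, new handle each time,
        # mirroring named Obtain Queue behavior.
        q = _registry.setdefault(name, queue.Queue(maxsize=1))
        return Handle(q)

    a = obtain_queue("singleton")
    b = obtain_queue("singleton")
    a.enqueue({"config": 42})
    print(b.dequeue())   # {'config': 42} -- both handles see one queue

-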
No idea. Just assume LabVIEW was out for a night of heavy drinking and was hungover the next day. Dock its pay and put it back to work.
-
Why exactly does opening Xnodes give a license error?
Aristos Queue replied to Sparkette's topic in VI Scripting
Yep.
-
Futures - An alternative to synchronous messaging
Aristos Queue replied to Daklu's topic in Object-Oriented Programming
drjpowell: Re: 1) Yes. Re: 2) Yes, it is easier to code than watching for all the messages to come back. I wonder, though, if it might also be easier to design a "round robin" message: create a message with a list of processes to visit, send the message to the first one, it adds its info, then passes the message to the next process on the list, coming back to the original process when it is done. That would reduce the "do I have them all yet" bookkeeping and still be consistent with asynch messaging. I've never tried to build anything like that. (A rough sketch of the idea is below.)
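Here is a minimal Python sketch of that round-robin message, under the stated assumptions (one inbox per process; the process names and message fields are made up): each stop appends its info and forwards, so the route itself does the bookkeeping.

    import queue

    # One inbox per process; names are made up for illustration.
    inboxes = {name: queue.Queue() for name in ("A", "B", "C", "origin")}

    def handle_round_robin(my_name, msg):
        # Add this process's info, then forward to the next stop.
        msg["replies"][my_name] = f"data from {my_name}"
        next_stop = msg["route"].pop(0)
        inboxes[next_stop].put(msg)

    # Origin kicks it off; the route ends back at the origin, so there
    # is no "do I have them all yet?" bookkeeping anywhere.
    msg = {"route": ["B", "C", "origin"], "replies": {}}
    inboxes["A"].put(msg)

    for name in ("A", "B", "C"):   # simulate each process handling once
        handle_round_robin(name, inboxes[name].get())

    print(inboxes["origin"].get()["replies"])

-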
flarn2006: Believe me, I pull passwords off of as many things as I can. I've championed that cause for over a decade now. When I leave a password in place, it's because of one of two things: a) mucking with whatever is inside will more likely destabilize it than help it, or b) there's really nothing inside other than a Call Library node, and locking such trivial diagrams actually makes things easier to work with. If there's something you really want the password taken off of, ask and I'll generally look into it, but I swear, there's nothing that's going to help your LabVIEW experience inside 99% of them. The 1% that are left are pretty much VIs left over from when scripting was not generally available, and even then, the functions therein are usually available through other means. At one point, you said that you don't like not knowing what's going on under there. And yet you use the various LabVIEW primitives -- Add, Enqueue, TCP Send, etc. Just think of the password-protected VIs as being pretty much like those. For the most part, you'll be correct.

You're wrong, at least for limited subsets of the block diagrams. And I'm quite sure someone will have the full language reversible within a couple of years. It is the way of software. That's why, for me, the passwords are a flag of "you don't want to be messing with this", not "I don't want you to see this." I definitely -- as usual -- do not speak for all of NI on this point. :-)
-
If I had to wager, I'd suggest that your VI is saved with a path to the typedef like c:\typedef.ctl. On Machine A, this typedef is found and loaded. On Machine B, this typedef is missing, so LV searches for it and finds it at d:\typedef.ctl almost instantaneously, so the Find dialog never even pops up. The tricky part is that d:\typedef.ctl exists on both machines, so when you open both typedefs, they look exactly the same and you can't figure out why LV thinks there's a difference. That might not be your problem, but it is a situation that would produce the weirdness you're seeing, and it has actually happened to me in the past.
-
Architecture templates
Aristos Queue replied to TheBoss's topic in Application Design & Architecture
I can't go into any details because the product is not yet released, but LabVIEW 2012 will have a significantly better answer to this question than previous versions of LabVIEW. If you're still looking for an answer to this question come August, check out the new release.
-
Futures - An alternative to synchronous messaging
Aristos Queue replied to Daklu's topic in Object-Oriented Programming
With the asynch messaging, there is no polling. The process has one place where it waits for incoming messages. At some point, the asynch message "I have the data you asked for" arrives and the process can act on the delivered data. Until then, the process is asleep, pending a new message, and takes no CPU. Contrast this with the "polling for futures" case, which is "send request to other process, check for messages, if no messages, check future, if no future, check messages, repeat until either a new message or the future is available." The process never really goes to sleep. It is constantly burning CPU flipping back and forth between the two polls. Futures are a fine idea unless they lead to that fairly expensive polling loop.
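A rough Python analogy of the contrast, using concurrent.futures as a stand-in for a Notifier-based future (this is a sketch of the shape of the problem, not LabVIEW's API):

    import queue
    from concurrent.futures import ThreadPoolExecutor

    inbox = queue.Queue()
    pool = ThreadPoolExecutor()
    future = pool.submit(lambda: "data you asked for")  # stand-in supplier

    # "Polling for futures": the needy process never really sleeps;
    # it burns CPU flipping between its two checks.
    while True:
        try:
            msg = inbox.get_nowait()   # check for messages
            break
        except queue.Empty:
            pass
        if future.done():              # check the future
            msg = future.result()
            break

    # Asynch messaging instead: the supplier posts its reply as an
    # ordinary message, and the needy process has ONE blocking wait.
    # It sleeps (no CPU) until data, Stop, or anything else arrives.
    inbox.put(("data ready", msg))
    print(inbox.get())

-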
Futures - An alternative to synchronous messaging
Aristos Queue replied to Daklu's topic in Object-Oriented Programming
This thread finally made it to the front of my queue of "topics to dig into". Let's take the basic idea that a future is implemented using a Notifier. Needy Process is the process that needs information from another process. Supplier Process is the process supplying that information. I am choosing these terms to avoid conflict with producer/consumer terminology, especially since the traditional producer loop could be the needy loop in some cases.

First I want to highlight one variation of asynchronous messages, a particular style of doing the asynchronous process that Daklu describes in his first post. If Needy Process is going to get information from Supplier Process using asynchronous messages, it might do this: Needy creates a message to send to Supplier that includes a description of the data needed and a block of data we'll call "Why" for now. Supplier receives the message. It creates a new message to send to Needy. That message includes the requested data and a copy of the Why block. Needy receives the message. The "Why" block's purpose now becomes clear: it is all the information that Needy had at the moment it made the request about why it was making the request and what it needed to do next. It now takes that block in combination with the information received from Supplier and does whatever it was wanting to do originally. (A concrete sketch of this pattern appears at the end of this post.)

There's nothing revolutionary about those steps -- please don't take this as me trying to introduce a new concept (especially not to Daklu, who knows this stuff well). I'm highlighting this pattern because it shifts responsibility for storing the state data from the Needy Process' own state to the state of the message class. This technique can dramatically simplify the state data storage problem because Needy no longer needs to store an array of "Why" blocks and figure out some sort of lookup ID for matching which response from Supplier goes with which task. It also means that most of the time, Needy isn't carrying around all that extra state data during those times when it isn't actively requesting information from Supplier.

Why is this variation of interest when thinking about futures? I'm OK with the general concept of futures... indeed, without actually naming them as such, I've used variations on this theme. I do want to highlight some details that I think are noteworthy.

Do futures really avoid saving state when compared to asynch messages? I will agree that the *type* of the state information that must be stored is different, but not necessarily the quantity or complexity. Needy Process creates a notifier and sends that notifier to Supplier Process. And then Needy Process has to hold onto the Notifier refnum. That's state data right there. That four-byte number has to be stored as part of Needy Process, whether it is in the shift register of the loop itself or stored in some magic variable. If there are multiple simultaneous requests to Supplier for different bits of information, then it becomes an array of Notifier refnums.

In the original post, Needy is described as "knowing that it will eventually need information". But something still has to trigger it to actually try to use that information. In both of Daklu's posts, there is a secondary *something* that triggers that data to be used. In one, it is the five-second timeout that says, "OK, it's a good time for me to get that data." In the second, it is an event "MeanCalculated" that fires. Both of those event systems have state overhead.
Now, it is state behind the scenes of LabVIEW, and that does mean you, as a programmer, do not have to write code to store that state, but it is there.

Finally, be careful that these futures do not turn into polling loops. It would be very easy to imagine Needy creates the Notifier, sends it to Supplier, and then goes and does something, comes back, checks the Notifier with a timeout of zero milliseconds to see "is it ready yet?", and then rushes off to do some other job if it isn't ready. If you have to introduce a new state to check the notifier, you're on a dark, dark path. And I've seen this happen in code. In fact, it happens easily. The whole point of futures is that Needy *knows* it will need this data shortly. So it sends the request, then it does as much work as it can, but eventually it comes around to the point where it needs that data. What happens when Needy gets to the Wait For Notifier primitive and the data isn't ready yet? It waits. And right then you have defeated much of the purpose of the rest of your asynchronous system.

Now, you can say, "Well, I got all the work I knew about done in the meantime, and this process doesn't get instructions from the outside world, so if it waits a bit, I still have done everything I could in the meantime." But there is one message, one key message, that you can never know whether it is coming or not: Stop. The instruction to Stop will not wake up the Wait For Notification primitive. Stop will be sitting in Needy's message queue, waiting to be processed, but it gets ignored because Needy is waiting on a notifier. Crisis? Depends on the application. Certainly it can lead to a sluggish UI shutdown. If you want an example of that bad behavior, come August, take a look at the new shipping example I've put into LabVIEW 2012. The user hits the stop button and the app can hang for a full second because of one wait instruction deep in one part of the code. I've thought about refactoring it, but it makes a nice talking point for an example application.

So, in my opinion, this concept of futures is a good concept to have in one's mental toolbox, but one that should be deployed cautiously. I'd put it on the list of Things We Use Sparingly -- less common than Sequence Structures but more common than global variables.
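For those who'd like the "Why" block variation in concrete form, here's a hedged Python sketch (the field names are inventions for illustration): the request carries its own continuation context, the supplier echoes it back, and Needy never keeps a lookup table of pending requests.

    import queue

    needy_inbox = queue.Queue()
    supplier_inbox = queue.Queue()

    # Needy: everything it will need to finish the job rides along
    # in the "why" block instead of being stored in Needy's state.
    supplier_inbox.put({
        "request": "mean of channel 3",
        "why": {"action": "update plot", "plot_id": 7},
    })

    # Supplier: answers the request and copies the why block back.
    req = supplier_inbox.get()
    needy_inbox.put({"data": 42.0, "why": req["why"]})

    # Needy, later: the reply is self-describing, so no lookup ID
    # or array of pending why blocks is needed.
    reply = needy_inbox.get()
    print(f"{reply['why']['action']} #{reply['why']['plot_id']}"
          f" with {reply['data']}")

-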
And this, ladies and gentlemen, is why any time you have heard me speak in the last three years, I have harped on one point in almost every speech: the importance of buddying code. Nothing -- NOTHING -- does more to catch bugs and correct architecture mistakes that will bite you in the future than having a second set of eyes look over your code. If you have a team, buddy your code. If you are a lone developer or contractor, find someone else in a similar role in your community and buddy each other's code. It will help A LOT. I promise.
-
A friend in high school compiled his own operating system kernel that assumed all EXEs were encoded with an extra byte after each byte and when it loaded the EXEs into memory, it dropped every second byte from the file. The result was that only EXEs that he had deliberately salted with pad bytes could run on this machine. This was a major line of defense in the war to keep the high school computer lab running despite all variations of malware being tracked in by various parties. If a program hadn't gone through his specific blessing tool, it wouldn't run when loaded on those machines.
-
Should be easy enough to wrap the interesting calls in a static VI wrapper. I'm not even sure how I would define the behavior to work for dynamic dispatch... should it log every call to just the one VI that is halo'd? Every call? How about calls that happen through the Call Parent Node? If it is every call, what about calls that are explicitly to a higher level of inheritance? What about calls that are to a lower level of inheritance?

Not risky in the same sense. This feature works, and works well, exactly as designed. It hasn't been deprecated or anything like that. It just hasn't been polished in a while, and if you have questions, it means a lot of "oh, how did that go again?" research on the part of folks here at NI. :-)
-
At the risk of causing heartburn and panic among my fellows in LV R&D, there is a feature in LabVIEW that will do what you just asked for as far as recording is concerned. The reason it may cause panic is that it is so rarely used: although the test suite keeps passing, it hasn't had any developers work on it in well over a decade. And yet it is still there. We kind of loathe this feature because keeping it working has required some extra complexity in some new features, and we've talked about killing it. Me advocating it as a solution runs the risk of breathing new life into it. :-) I give you this intro so that you understand: the UI is a bit rickety, but it works, at least for its intended original use case.

Right-click on any subVI node. In the menu, you'll see "Enable Database". You probably have never used this menu item (I've polled huge crowds of LV users and I almost never get anyone who knows what it does unless there's a LV R&D teammate in the room). "Enable Database" will cause a "halo" to appear around the node. All the input terminals turn into output terminals, and the halo has additional terminals. When this node is part of your application, any calls to the same subVI as the halo'd subVI get logged -- the inputs and the outputs. When the halo'd node executes, it takes as input an integer that is a call ID. This allows you to retrieve the conpane of that subVI as it was the Nth time it was executed.

I know what this feature does for static dispatch, non-reentrant subVIs. For anything else, well, I will bet that it won't crash, but I've got no idea what the defined behaviors would be. I'm 90% certain this feature does not work for dynamic dispatch VIs (I recall consciously disabling it, but someone else on my team may have hooked it up at some point). I have no idea what its behavior is for reentrant VIs. Play around with it. See how it works. It may be slower than you want (I've got no idea what the data structure that it uses for a database looks like). It may have memory issues (I know it doesn't do any disk caching or anything like that). But perhaps it has value to you.
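If the built-in feature proves too rickety, the "wrap the interesting calls" idea from the reply above is easy to approximate in a textual language. A hedged Python sketch (the decorator name and storage scheme are made up): record each call's inputs and outputs so the Nth call can be retrieved later by ID.

    import functools

    call_log = {}   # function name -> list of (args, kwargs, result)

    def record_calls(func):
        """Log every call's inputs and outputs, like a halo'd subVI."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            call_log.setdefault(func.__name__, []).append(
                (args, kwargs, result))
            return result
        return wrapper

    @record_calls
    def scale(x, gain=2.0):
        return x * gain

    scale(3)
    scale(5, gain=10)
    n = 1   # retrieve call ID n (0-based), like the call-ID terminal
    print(call_log["scale"][n])   # ((5,), {'gain': 10}, 50)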
-
[LVTN] JGCODE Preferences Dialog Library
Aristos Queue replied to jgcode's topic in End User Support
For the timing of the changes: I forgot that the date that we give the beta to you is a few days after we build the final version from our source code. Yes, the changes are only in Beta 2 -- they were submitted in the small window after we cut our final image but before it was actually released to you, and I compared against the actual release date.