Posts posted by Aristos Queue

  1. I think Rammer's question is more detailed ... he asks LabVIEW to load a top-level VI. Can he get any sort of progress information about where LV is in the load process and display a progress bar?

    Rammer, the answer is no, not in general. We had a new hire join the LV team during the 2012 release cycle, and one of the initial "small" projects he was given to get started with our code base was to try to design such a progress bar system, both for use within LV's internal dialog and possibly exposing hooks for you to create such a progress bar in your code. He ended up pulling in a rather large portion of the LabVIEW team, trying to find a decent solution.

    There's a fundamental logical barrier to doing this: when a VI loads, there's no way for the top-level VI to have any idea how many subVIs it will end up loading as its full hierarchy loads in. The group who worked on this tried many, many approaches to get around this lack of knowledge and still produce a progress bar that only moves forward and doesn't end up with the 99%-and-holding problem. Nothing was ever particularly satisfactory.

    We concluded that the only valid solution is an application-by-application one. If you just open a reference to your top-level VI, that loads all the VIs into memory at once. But you could instead open a reference to one of its deep subVIs, thus loading only that subtree. Then open a reference to a VI one layer up, then another layer up, and update your own progress bar after each of those Open VI Reference calls, since you know what percentage of your VI hierarchy each particular open represents.
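
    (A rough text-language sketch of that per-application approach, written in Python only because G doesn't paste into a forum post. open_vi_reference, the paths, and the percentages are all placeholders you would replace with your own hierarchy and your own estimates.)

        def open_vi_reference(path):
            """Placeholder standing in for Open VI Reference: loading 'path'
            also loads every subVI beneath it that isn't already in memory."""
            pass

        # Deepest subtrees first, each tagged with the share of the total
        # hierarchy you estimate it represents.
        load_plan = [
            ("daq_layer.vi",   0.40),
            ("analysis.vi",    0.30),
            ("main_panel.vi",  0.30),   # the top-level VI; loads whatever is left
        ]

        progress = 0.0
        for path, share in load_plan:
            open_vi_reference(path)
            progress += share
            print(f"Loading... {progress:.0%}")   # drive your own progress bar here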

    That new hire moved on to other projects within 2012, but he continues to check out other apps and strategies for handling this problem generally, so maybe something will pop up in the future; at the moment, though, no good ideas are on the table. Note that any strategy that gives us a load progress bar but ultimately makes loading take longer, like preflighting all the subVIs, is off the table... the last thing LabVIEW needs is to *add* load time in the dev environment when we've made good strides these last couple of releases at *subtracting* it!

  2. Yeah, but when someone writes a big Minecraft map in version Alpha.1 and that map doesn't load in version 1.0, it doesn't potentially sink a $2 million project. With LabVIEW, that's exactly what happens. And then they want us to make it work.

    At the time I posted the Randomize VI, passwords were not as severely broken as they are today. You can be sure I won't be repeating the mistake of publicly posting a prototype in the future.

  3. Can you expand on this? I'm not making the connection as to why it would be easier.

    For me, it's never about not trusting code I can't read - all of us trust code we can't read all the time; it's practically unavoidable. It's more about knowing that there's something I could read and there's just one password between me and it.

    The password signals that if I'm looking for something I can adjust, there's no reason to look here. Now that scripting is released, that signal is the primary reason for passwords to exist. In that sense, it's a time saver.

    In the case of the call library, there's nothing there to read... it isn't as if you would learn any aspect of G programming by seeing that call, and the vast majority of them have all of their parameters wired fully to the conpane. In the case of the unreleased features, we may have configured it into the one setup that actually works, and almost any adjustment will destabilize it. Or it has some feature that doesn't really work for arbitrary use cases, and the only one that does work is the one we have exposed as a VI. We get people calling us up all the time who have broken into these VIs and want us to fix their system, which is no longer working. It's hard to have sympathy for them.

    We've discussed that if the password protection becomes insufficient generally, we might change to shipping these as built DLLs, so the VIs won't even exist on disk. That may be the better thing to do so there isn't "just a password" standing between users and the diagrams.

  4. Ulf: There's no garbage collector in LabVIEW. GC is a technical term with specific meanings for programming languages. Say instead that LabVIEW has contracted times when the references will be automatically released.

    Fernando: A reference -- any reference type -- in LabVIEW is automatically destroyed when the top level VI that created it goes idle. I'm not sure what you're using for your "singleton class" because that's a pretty ill-defined term in LabVIEW. I'm going to assume that you mean you have a Data Value Reference that contains an object and you only create one of those DVRs and just return the same DVR every time someone requests it. That DVR is only going to remain valid as long as the first top-level VI is running. You will need a different mechanism to share references between separate top-level VIs. If you are using DVRs, let me suggest you use a single-element queue instead... give the queue a name at Obtain Queue and that way you'll get a different refnum every time, but each refnum will refer to the same underlying queue. There are lots of comments on LAVA and on ni.com about single-element queues if you need further guidance.
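
    (If it helps to see the shape of that named-queue singleton, here is a rough Python analogy. obtain_queue and the registry are stand-ins for what LabVIEW's Obtain Queue does for you by name, not an actual API, and in Python the two "refnums" are simply the same object rather than distinct handles.)

        import queue

        _named_queues = {}

        def obtain_queue(name, maxsize=1):
            """Return a handle to the named single-element queue, creating it
            the first time the name is requested."""
            if name not in _named_queues:
                _named_queues[name] = queue.Queue(maxsize=maxsize)
            return _named_queues[name]

        # Two independent callers each "obtain" the queue by name...
        q1 = obtain_queue("MySingleton")
        q2 = obtain_queue("MySingleton")

        # ...but both handles refer to the same underlying one-element store.
        q1.put({"shared": "state"})
        print(q2.get())     # -> {'shared': 'state'}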

    > You may regret saying that :D
    He's a new guy... I take it easy on the new guys. :-)
  5. drjpowell:

    Re: 1) Yes.

    Re: 2) Yes, it is easier to code than watching for all the messages to come back. I wonder, though, if it might also be easier to design a "round robin" message: create a message with a list of processes to visit, send the message to the first one, it adds its info, then passes the message to the next process on the list, coming back to the original process when it is done. That would reduce the "do I have them all yet" bookkeeping and still be consistent with asynch messaging. I've never tried to build anything like that.
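
    (A minimal Python sketch of that round-robin idea, just to make the bookkeeping visible; the process names, the per-process mailboxes, and the handle() helper are invented for the illustration, not an existing API.)

        import queue

        mailboxes = {name: queue.Queue() for name in ("A", "B", "C", "origin")}

        def send_round_robin(route):
            """The originator builds one message that carries its own itinerary."""
            msg = {"route": list(route), "replies": []}
            mailboxes[route[0]].put(msg)

        def handle(name, msg):
            """Each process adds its info, then forwards to the next stop,
            or back to the originator when the route is exhausted."""
            msg["replies"].append((name, f"info from {name}"))
            msg["route"].pop(0)
            next_stop = msg["route"][0] if msg["route"] else "origin"
            mailboxes[next_stop].put(msg)

        send_round_robin(["A", "B", "C"])
        for name in ("A", "B", "C"):
            handle(name, mailboxes[name].get())

        print(mailboxes["origin"].get()["replies"])   # all three answers, in order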

  6. flarn2006: Believe me, I pull passwords off of as many things as I can. I've championed that cause for over a decade now. When I leave a password in place it's because of one of two things:

    a) mucking with whatever is inside will more likely destabilize it than help it

    b) there's really nothing inside other than a call library node, and locking such trivial diagrams actually makes things easier to work with.

    If there's something you really want to take the password off of, ask and I'll generally look into it, but I swear, there's nothing that's going to help your LabVIEW experience inside 99% of them. That 1% that are left are pretty much VIs left over from when scripting was not generally available, and even then, the functions therein are usually available through other means.

    At one point, you said that you don't like not knowing what's going on under there. And yet, you use the various LabVIEW primitives -- Add, Enqueue, TCP Send, etc. Just think of the password-protected VIs as being pretty much like those. For the most part, you'll be correct.

    > Or at least not that I know of... correct me if I'm wrong here ;)
    You're wrong, at least for limited subsets of the block diagrams. And I'm quite sure someone will have the full language reversible within a couple years. It is the way of software. That's why, for me, the passwords are a flag of "you don't want to be messing with this", not "I don't want you to see this." I definitely -- as usual -- do not speak for all of NI on this point. :-)
  7. If I had to wager, I'd suggest that your VI is saved with a path to the typedef like c:\typedef.ctl. On Machine A, this typedef is found and loaded. On Machine B, this typedef is missing, so LV searches for it and finds it at d:\typedef.ctl almost instantaneously, so the Find dialog never even pops up. The tricky part is that d:\typedef.ctl exists on both machines, so when you open both typedefs, they look exactly the same and you can't figure out why LV thinks there's a difference.

    That might not be your problem, but it is a situation that would result in the weirdness you're seeing that I have actually had happen to me in the past.

  8. So I'm a bit confused: what's especially bad about this use of Futures?

    With the asynch messaging, there is no polling. The process has one place where it waits for incoming messages. At some point, the asynch message "I have the data you asked for" arrives, and the process can act on the delivered data. Until then, the process is asleep, pending a new message, and takes no CPU. Contrast this with the "polling for futures" case, which is "send request to other process, check for messages, if no messages, check future, if no future, check messages, repeat until either a new message or the future is available." The process never really goes to sleep. It is constantly burning CPU flipping back and forth between the two polls. Futures are a fine idea unless they lead to that fairly expensive polling loop.
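
    (To make the contrast concrete, here's a hedged Python sketch of the two shapes. 'inbox', 'future', and process() are illustrative names; a concurrent.futures.Future would satisfy the done()/result() calls, but any future-like object works the same way for this point.)

        import queue

        inbox = queue.Queue()

        def process(data):
            print("working with", data)

        # Asynch-message style: one blocking wait, zero CPU while idle.
        def message_loop():
            while True:
                msg = inbox.get()                  # sleeps until *any* message arrives
                if msg["kind"] == "stop":
                    break
                if msg["kind"] == "data ready":    # the reply is itself the wake-up
                    process(msg["data"])

        # Polling-for-futures style: the loop never really goes to sleep.
        def polling_loop(future):
            while True:
                try:
                    msg = inbox.get_nowait()       # poll the mailbox
                except queue.Empty:
                    msg = None
                if msg is not None and msg["kind"] == "stop":
                    break
                if future.done():                  # poll the future
                    process(future.result())
                    break
                # Neither is ready: spin around and poll again, burning CPU.
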
  9. This thread finally made it to the front of my queue of "topics to dig into".

    Let's take the basic idea that a future is implemented using a Notifier. Needy Process is the process that needs information from another process. Supplier Process is the process supplying that information. I am choosing these terms to avoid conflict with producer/consumer terminology, especially since the traditional producer loop could be the needy loop in some cases.

    First I want to highlight one variation of asynchronous messages, a particular style of doing the asynchronous process that Daklu describes in his first post. If Needy Process is going to get information from Supplier Process using asynchronous messages, it might do this:

    1. Needy creates a message to send to Supplier that includes a description of the data needed and a block of data we'll call "Why" for now.
    2. Supplier receives the message. It creates a new message to send to Needy. That message includes the requested data and a copy of the Why block.
    3. Needy receives the message. The "Why" block's purpose now becomes clear: it is all the information that Needy had at the moment it made the request about why it was making the request and what it needed to do next. It now takes that block in combination with the information received from Supplier and does whatever it was wanting to do originally.

    There's nothing revolutionary about those steps -- please don't take this as me trying to introduce a new concept (especially not to Daklu, who knows this stuff well). I'm highlighting this pattern because it shifts the responsibility for storing the state data from the Needy Process' own state to the state of the message class. This can dramatically simplify the state data storage problem because Needy no longer needs to store an array of "Why" blocks and maintain some sort of lookup ID to figure out which response from Supplier goes with which task. It also means that Needy isn't carrying around all that extra state data during the times when it isn't actively requesting information from Supplier.
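
    (Here's a tiny Python sketch of that shift, purely illustrative; the field names and the alarm-limit example are made up. The point is that the context rides along inside the message and comes back with the reply.)

        import queue

        needy_inbox = queue.Queue()
        supplier_inbox = queue.Queue()

        # 1. Needy packs the "Why" block -- everything it will need to finish
        #    the job later -- into the request itself.
        supplier_inbox.put({
            "request": "latest temperature",
            "reply_to": needy_inbox,
            "why": {"log_file": "run42.log", "alarm_limit": 80.0},
        })

        # 2. Supplier answers, copying the Why block back untouched.
        req = supplier_inbox.get()
        req["reply_to"].put({"data": 75.2, "why": req["why"]})

        # 3. Needy finishes using the reply plus its echoed-back context --
        #    no lookup table of pending requests required.
        reply = needy_inbox.get()
        why = reply["why"]
        status = "ALARM" if reply["data"] > why["alarm_limit"] else "ok"
        print(status, reply["data"], "-> append to", why["log_file"])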

    Why is this variation of interest when thinking about futures? I'm ok with the general concept of futures ... indeed, without actually naming them as such, I've used variations on this theme. I do want to highlight some details that I think are noteworthy.

    Do futures really avoid saving state when compared to asynch messages? I will agree that the *type* of the state information that must be stored is different, but not necessarily the quantity or complexity.

    Needy Process creates a notifier and sends that notifier to Supplier Process. And then Needy Process has to hold onto the Notifier refnum. That's state data right there. That four-byte number has to be stored as part of Needy Process, whether it is in the shift register of the loop itself or stored in some magic variable. If there are multiple simultaneous requests to Supplier for different bits of information, then it becomes an array of Notifier refnums.

    In the original post, Needy is described as "knowing that it will eventually need information". But something still has to trigger it to actually try to use that information. In both of Daklu's posts, there is a secondary *something* that triggers that data to be used. In one, it is the five second timeout that says, "Ok, it's a good time for me to get that data." In the second, it is an event "MeanCalculated" that fires. Both of those event systems have state overhead. Now, it is state behind the scenes of LabVIEW, and that does mean you, as a programmer, do not have to write code to store that state, but it is there.

    Finally, be careful that these futures do not turn into polling loops. It would be very easy to imagine that Needy creates the Notifier, sends it to Supplier, and then goes and does something, comes back, checks the Notifier with a timeout of zero milliseconds to see "is it ready yet?", and then rushes off to do some other job if it isn't ready. If you have to introduce a new state to check the notifier, you're on a dark, dark path. And I've seen this happen in code. In fact, it happens easily.

    The whole point of futures is that Needy *knows* it will need this data shortly. So it sends the request, then it does as much work as it can, but eventually it comes around to the point where it needs that data. What happens when Needy gets to the Wait For Notification primitive and the data isn't ready yet? It waits. And right then you have defeated much of the purpose of the rest of your asynchronous system. Now, you can say, "Well, I got all the work I knew about done in the meantime, and this process doesn't get instructions from the outside world, so it's fine if it waits a bit." But there is one message, one key message, that you can never know whether it is coming or not: Stop. The instruction to Stop will not wake up the Wait For Notification primitive. Stop will be sitting in Needy's message queue, waiting to be processed, but gets ignored because the process is waiting on a notifier.

    Crisis? Depends on the application. Certainly it can lead to a sluggish UI shutdown. If you want an example of that bad behavior, come August, take a look at the new shipping example I've put into LabVIEW 2012. User hits the stop button and the app can hang for a full second because of one wait instruction deep in one part of the code. I've thought about refactoring it, but it makes a nice talking point for an example application.
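
    (A tiny Python illustration of that hazard; the names are invented, and a Future plays the role of the notifier. The Stop message sits unread in the inbox for the entire duration of the wait.)

        import queue
        from concurrent.futures import Future, TimeoutError as FutureTimeout

        inbox = queue.Queue()
        future = Future()                 # the "notifier" Supplier is supposed to fill

        inbox.put({"kind": "stop"})       # Stop arrives while Needy is off working

        # Needy reaches the point where it needs the data and just waits:
        try:
            data = future.result(timeout=1.0)   # blocks a full second; inbox is ignored
        except FutureTimeout:
            data = None

        # Only now does the loop get back to its message queue and discover
        # it should have shut down a second ago -- the sluggish-UI symptom.
        print(inbox.get())                # -> {'kind': 'stop'}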

    So, in my opinion, this concept of futures is a good concept to have in one's mental toolbox, but one that should be deployed cautiously. I'd put it on the list of Things We Use Sparingly as less common than Sequence Structures but more common than global variables.

  10. And this, ladies and gentlemen, is why any time you have heard me speak in the last three years, I have harped on one point in almost every speech: the importance of buddying code. Nothing -- NOTHING -- does more to catch bugs and correct architecture mistakes that will bite you in the future than having a second set of eyes look over your code. If you have a team, buddy your code. If you are a lone developer or contractor, find someone else in a similar role in your community and buddy each other's code. It will help A LOT. I promise.

  11. A friend in high school compiled his own operating system kernel that assumed all EXEs were encoded with an extra byte after each byte and when it loaded the EXEs into memory, it dropped every second byte from the file. The result was that only EXEs that he had deliberately salted with pad bytes could run on this machine. This was a major line of defense in the war to keep the high school computer lab running despite all variations of malware being tracked in by various parties. If a program hadn't gone through his specific blessing tool, it wouldn't run when loaded on those machines.

  12. > While searching for a solution I went over most options I could put my hands on and this was one of the winners.

    > It gave me some trouble by freezing a VI, since it tried to record something I hadn't taken into consideration, and as for dynamic VIs, it misses the point for a HAL, which is OO and in some places reentrant.

    Should be easy enough to wrap the interesting calls in a static VI wrapper.

    I'm not even sure how I would define the behavior to work for dyn dispatch... should it log every call to just the one VI that is halo'ed? Every call? How about calls that happen through the Call Parent Node? If it is every call, what about calls that are explicitly to a higher level of inheritance? What about calls that are to a lower level of inheritance?

    > but now I see even some right click features are risky.

    Not risky in the same sense. This feature works, and works well, exactly as designed. It hasn't been deprecated or anything like that. It just hasn't been polished in a while, and if you have questions, it means a lot of "oh, how did that go again?" research on the part of folks here at NI. :-)

  13. At the risk of causing heartburn and panic among my fellows in LV R&D, there is a feature in LabVIEW that will do what you just asked for as far as recording is concerned.

    The reason it may cause panic is because it is so rarely used, so although the test suite keeps passing, it hasn't had any developers work on it in well over a decade. And yet it is still there. We kind of loathe this feature because keeping it working has required some extra complexity in some new features, and we've talked about killing it. Me advocating it as a solution runs the risk of breathing new life into it. :-) I give you this intro so that you understand: the UI is a bit rickety, but it works, at least for its intended original use case.

    Right click on any subVI node. In the menu, you'll see "Enable Database". You probably have never used this menu item (I've polled huge crowds of LV users and I almost never get anyone who knows what it does unless there's a LV R&D teammate in the room). "Enable Database" will cause a "halo" to appear around the node. All the input terminals turn into output terminals, and the halo has additional terminals.

    When this node is part of your application, any calls to the same subVI as the halo'd subVI get logged -- the inputs and the outputs. When the halo'd node executes, it takes as input an integer that is a call ID. This allows you to retrieve the conpane of that subVI as it was the Nth time it was executed.

    I know what this feature does for static dispatch, non-reentrant subVIs. For anything else, well, I will bet that it won't crash, but I've got no idea what the defined behaviors would be. I'm 90% certain this feature does not work for dynamic dispatch VIs (I recall consciously disabling it, but someone else on my team may have hooked it up at some point). I have no idea what its behavior is for reentrant VIs.

    Play around with it. See how it works. It may be slower than you want (I've got no idea what the data structure that it uses for a database looks like). It may have memory issues (I know it doesn't do any disk caching or anything like that). But perhaps it has value to you.

  14. For the timing of the changes: I forgot that the date that we give the beta to you is a few days after we build the final version from our source code. Yes, the changes are only in Beta 2 -- they were submitted in the small window after we cut our final image but before it was actually released to you, and I compared against the actual release date.

  15. JGCode: Have you tried this in LV 2012? Opening and running your example project, it is taking 24 seconds to launch the Preferences Dialog and between 5 and 10 seconds to switch pages within the dialog. No idea why. Just doing Tools>>Options within LV comes up within half a second (human counting, didn't bother actually benchmarking). (I would try installing into my LV 2011 to compare, but VI Package Manager won't talk to LV 2011 on my machine... keeps complaining about the VI Server security settings no matter how I set them... it's almost certainly my fault -- I've got my 2011 loaded with almost every module and who knows how many packages in an attempt to replicate a weird CAR report, but until I get that untangled, I don't have a 2011 to look at.)

  16. The biggest problem with using string for common data types is that it leaves the formatting of that data up to each individual Serializable class to define the format. If you have N objects encoded into a file each of a different class and each one has a timestamp field, you can end up with N different formats for the strings. On the other hand, if we give Formatter alone knowledge of the timestamp (and other types of interest), it can have methods to control the formatting and parsing, and then we leave those off of the PropertyBag class. I'll draw it up and see what that looks like.
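
    (A hedged Python sketch of that division of labor; the class and method names here are invented for illustration and aren't the framework's actual API. The point is that the Serializable hands the Formatter a timestamp, never a string, so only one place decides how timestamps look on disk.)

        from datetime import datetime, timezone

        class Formatter:
            """Alone owns the text representation of the 'types of interest'."""
            def format_timestamp(self, ts: datetime) -> str:
                return ts.astimezone(timezone.utc).isoformat()

            def parse_timestamp(self, text: str) -> datetime:
                return datetime.fromisoformat(text)

        class RunRecord:
            """A Serializable: it never invents its own timestamp format."""
            def __init__(self, started: datetime):
                self.started = started

            def serialize(self, fmt: Formatter) -> str:
                return "started=" + fmt.format_timestamp(self.started)

        print(RunRecord(datetime.now(timezone.utc)).serialize(Formatter()))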

  17. > I presume your reluctance to support many of the types in LabVIEW

    ShaunR: That's part of it. Just as large a concern is the complexity added for developers of Serializers, who would have to work with all the types, and the work Formatters would have to do to handle all of those types.

    I do keep looking at JSON's 5 data types and thinking, "Maybe that would be enough." But I look at types like timestamp and path, and I know people would rather not have to parse those in every serializer or serializable, and *those* *aren't* *objects*. That historical fact keeps raising its ugly head. They don't have any ability to add their components piecemeal or to define themselves as a single string entity.

    > I forgot to mention that I also like complex waveforms for representing (2D) vector data, so that'd be a nice thing for a serialiser to grok.
    Since I'm not planning to support complex as a scalar type, complex waveforms would be particularly nasty to support. I think we'd have to admit the scalar complex first.