Posts posted by Aristos Queue

  1. QUOTE (flarn2006 @ Mar 21 2009, 02:29 PM)
    Also, to respond to Yair's comment, maybe someone on the forum could find a method, advertise on the forum that they can do it, but not tell how for fear of NI finding it. They can't "fix" something if they don't know what it is, right?
    Ah, but we can, and have, randomly changed things between versions, just on the off chance that something is working that shouldn't be. :-)

  2. QUOTE (bsvingen @ Mar 21 2009, 01:25 AM)

    We call that an array.

    QUOTE (Aristos Queue @ Mar 20 2009, 04:15 PM)

    Grrr... A bug report against the documentation will be filed shortly. That should read:

    It appears this documentation was already changed in LV 8.6. You must've been quoting an earlier version of the Obtain Queue documentation. In any case, it was wrong no matter which version of LV's Obtain Queue documentation you were quoting.

  3. QUOTE (Michael Aivaliotis @ Mar 20 2009, 10:56 AM)

    You might have some point here but understand that you are arguing against an ingrained and taught (even by NI) philosophy that says you should never abort a VI (or use ctrl+period). Using abort has very limited uses. I can see it used if you are absolutely sure that it won't cause problems or if you've designed it in to be aborted from the start.
    It's that last bit that I'm contemplating -- designing for abort, i.e., designing the state machine to be aborted from the start. Yedinak mentioned a bunch of stuff besides references that may need to be cleaned up. All of that could live back in the main VI, sitting right after the call to the Run VI method.

    Your biggest danger is still with the hardware interactions -- don't let anything I post here dissuade you from that. But I am suggesting that "Never use the abort button" is very different from "never use the Abort method", maybe. The Abort button you might use on just any VI, with no clue as to what you're aborting or how safe it is. The Abort method you'd be laying in as a deliberate stop mechanism.

    Anyway... idle musings from a C++ programmer. One of you G programmers should explore this idea and let me know how it turns out.

  4. QUOTE (flarn2006 @ Mar 20 2009, 06:32 PM)

    Didn't we first find out about VI scripting through VI's that NI forgot to password-protect? When password-protected VI's are run, the computer surely accesses the block diagram at one point or another, so obviously it is possible to read the block diagram. LabVIEW just won't let us see it. Can anyone find any way to force it to show you? Perhaps by hex-editing the files or even using Cheat Engine or something?

    Nope. When VIs are run, the assembly code that the compiler produced is executed. The block diagram never even gets loaded into memory. Heck, it isn't even saved as part of the VI when you build for the Run Time Engine.

  5. QUOTE (Gary Rubin @ Mar 20 2009, 12:19 PM)

    That assumption was based on the following from the Obtain Queue help:

    Note max queue size only limits the number of elements in the queue. It does not preallocate memory. Therefore, resizable data types such as paths, strings, arrays, and so on can still increase and decrease the overall queue size.

    Grrr... A bug report against the documentation will be filed shortly. That should read:

    QUOTE

    max queue size only limits the number of elements in the queue. It does not preallocate that many elements in the queue. If you want to preallocate your queue, enqueue that many elements and then flush the queue. The space, once allocated, will remain allocated for further use of the queue.

    Enqueueing and dequeueing resizable data types, such as paths, strings and arrays, do not affect the memory of queues. The queues are used to move data around, but they do not generate copies of the data.
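    As a footnote to that corrected text: the enqueue-then-flush trick maps onto any queue API. Below is a minimal sketch of the idea in Python rather than G (the QueueAnalogy class, its size, and its methods are invented purely for illustration; the memory comments describe the LabVIEW behavior being modeled, not Python's own allocator).

    ```python
    from collections import deque

    class QueueAnalogy:
        """Invented stand-in for a LabVIEW queue refnum (illustration only)."""
        def __init__(self, max_elements=None):
            self.max_elements = max_elements   # limits the element COUNT; reserves nothing
            self._storage = deque()

        def enqueue(self, element):
            if self.max_elements is not None and len(self._storage) >= self.max_elements:
                raise BufferError("queue full")   # LabVIEW would block or time out instead
            self._storage.append(element)

        def flush(self):
            """Empty the queue and return what was in it."""
            elements = list(self._storage)
            self._storage.clear()
            return elements

    # The preallocation trick from the corrected documentation: grow the queue to its
    # working size once, then flush it. In LabVIEW the space allocated by those enqueues
    # remains available for later enqueues on the same queue.
    q = QueueAnalogy(max_elements=1000)
    for _ in range(1000):
        q.enqueue(0.0)
    q.flush()
    ```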

  6. QUOTE (Neville D @ Mar 18 2009, 04:45 PM)

    Aborting camera applications on RT causes the IMAQdx driver to be left hanging and the camera cannot be accessed until the system is rebooted, so I would say this is bad.
    OK, but those references could be opened and closed by the still-running, not-aborted UI VIs. The state machine that's doing work that needs to be stopped doesn't have to do the cleanup. If you can't open all the needed references before the state machine starts running, you could have the state machine call back to the original VI hierarchy (through posting a message and waiting for a response using two queues or user events -- see the sketch below) so that the original hierarchy can open references on the state machine's behalf. That way those references survive the abort. Then when it does abort, the original VIs take care of closing any references that weren't closed explicitly by the state machine.

    I'm still not saying this method is good or bad; I'm just walking through arguments to see if this path works. It seems like a more effective way to stop a process than having to code boolean checks all over your code, especially when LV has such nice hooks right in the assembly code at the end of each clump of nodes to detect abort.
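    A rough text-language sketch of that two-queue call-back pattern (Python threads standing in for parallel VIs; the message shape and the names here are invented):

    ```python
    import queue
    import threading

    request_q = queue.Queue()    # state machine -> main hierarchy ("please open X for me")
    response_q = queue.Queue()   # main hierarchy -> state machine (the opened reference)

    def state_machine():
        # Instead of opening the hardware reference itself (which would be orphaned by an
        # abort), the worker asks the still-running hierarchy to open it on its behalf.
        request_q.put(("open", "camera0"))
        camera_ref = response_q.get()
        print("state machine working with", camera_ref)
        # ... if this loop is aborted, the reference still belongs to the main hierarchy ...

    def main_hierarchy():
        worker = threading.Thread(target=state_machine)
        worker.start()
        action, name = request_q.get()          # service the "open" request
        ref = f"<reference to {name}>"          # placeholder for the real open call
        response_q.put(ref)
        worker.join()
        print("main hierarchy closing", ref)    # cleanup happens here, abort or not

    main_hierarchy()
    ```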

  7. QUOTE (shoneill @ Mar 19 2009, 04:59 PM)

    An interesting question is: what happens if I use a class which inherits from a different version of my parent class? What if there's a method VI more or less in the new Parent class, will the plug-in architecture still work? Is there ANY flexibility in the run-time dynamic dispatch (or inheritance) linking?
    Yes.

    The child class does not record whether the VI is an override of the parent or not. It doesn't care. So if the parent adds or removes an implementation, callers of the child class don't care.

  8. Another solution would be to have the UI be VI A and the state machine be VI B. Instead of calling VI B as a subVI, call it using a VI Reference and the Run VI method; then, when the user hits the STOP button, you call the Abort method on that VI reference. Your app as a whole keeps running, but that state machine stops.

    Is that more or less dirty? Can it be made acceptable somehow?

  9. QUOTE (jdunham @ Mar 17 2009, 03:20 PM)

    and since a dialog will come up before it's possible to validate the file, there is no way to safely use the binary data file functions in an industrial application. Is my understanding correct?
    No. The way to safely use the binary data file functions is to write a header whose contents let you recognize the file as one of your own before you trust the rest of it (see the sketch at the end of this post).

    What behavior would you want from LabVIEW? Should we secretly record some bytes that LV checks to say, "Yep, we wrote this file"? That would make it mighty hard to output some specific file format -- for example, a .png file. If LV put those secret bytes at the head of every file, you'd never be able to write a .png. Or any other format.

    QUOTE

    There's no reason except for bad file format design that a proprietary format can't have a header that identifies the file type and some kind of data validation scheme.

    Except the LV binary prims are NOT designed to output a proprietary file format. They output the binary strings as requested by you, the user.

    QUOTE

    Both my local graphics editors put up an error: "This is not a valid PNG file." (more or less).

    And I guarantee that I can put together a file that wouldn't. As I said, it matters how close the data is to being a valid file. PNG is probably not the best example, but there are plenty of graphics formats that are compressed, that assume they need to be unpacked, and you can put data in that will make the reader think it needs to decompress something gigantic.
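    For what it's worth, the header advice above is the usual magic-number technique. A minimal sketch, in Python rather than with the LV binary file primitives, using an invented marker and record layout:

    ```python
    import struct

    MAGIC = b"MYAPP\x01"          # invented marker + format version for this example

    def write_data_file(path, values):
        with open(path, "wb") as f:
            f.write(MAGIC)                                # header we can recognize later
            f.write(struct.pack(">I", len(values)))       # element count, big-endian
            f.write(struct.pack(f">{len(values)}d", *values))

    def read_data_file(path):
        with open(path, "rb") as f:
            header = f.read(len(MAGIC))
            if header != MAGIC:
                raise ValueError("not one of our files")  # reject before trusting any lengths
            (count,) = struct.unpack(">I", f.read(4))
            return struct.unpack(f">{count}d", f.read(8 * count))

    write_data_file("run1.dat", [1.5, 2.5, 3.5])
    print(read_data_file("run1.dat"))
    ```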

  10. QUOTE (Ton @ Mar 17 2009, 04:42 PM)

    Forgive my ignorance, but why should this be padded data?
    I forgive you. But I can't really explain it. I don't remember why it was done.

    QUOTE

    How does this go for LabVIEW 64-bit? Is the padding then 8 bytes?

    No change. It would be a real problem if it were changed. The format for flattened data has to be the same no matter where it was flattened, so that anyone can unflatten it.

  11. QUOTE (Jeffrey Habets @ Mar 17 2009, 09:14 AM)

    I'd qualify this as buggy behaviour. What are your thoughts?

    When LV is told to unflatten a string, we do our best to interpret it as the data type you claim it to be.

    If you flatten an eight-byte double as a string, then tell us to unflatten that string as a 4-byte integer, we're going to read the first four bytes. On the other hand, if you flatten a double and try to unflatten it as a string, we're going to treat the first four bytes of that data as the length of the string. Since this is likely a VERY large number, we will then try to allocate an array of that size, and we often run out of memory trying to do that (the sketch at the end of this post walks through that case). So depending upon exactly what you are flattening and unflattening, you may get the more helpful "data corrupt" errors, or you may get the "out of memory" errors. It's pot luck, depending on how closely the data matches something that is parsable.

    It's not a bug -- it is LV doing exactly what you told it to do. This behavior applies regardless of the data type you're unflattening, including LV classes. And it is not unique to LabVIEW. Try renaming a random file to ".png" and then asking a paint program to open it. You'll get any number of strange behaviors.

    The trick is to save your data files with a unique file extension and then restrict your users to only picking files with that extension.
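    To make the length-prefix point above concrete, here is a rough Python illustration (these helpers just mimic the flat-data layout described in this thread; they are not the LabVIEW primitives):

    ```python
    import struct

    def flatten_double(x):
        # LabVIEW flattens a DBL to 8 big-endian bytes.
        return struct.pack(">d", x)

    def unflatten_as_string(flat):
        # A flattened LabVIEW string is a 4-byte big-endian length followed by that many bytes.
        (length,) = struct.unpack(">I", flat[:4])
        print(f"claimed string length: {length} bytes")
        if length > len(flat) - 4:
            # LabVIEW would try to allocate this much and likely run out of memory.
            raise MemoryError("length prefix is far larger than the data that follows")
        return flat[4:4 + length]

    flat = flatten_double(1234.5678)
    try:
        unflatten_as_string(flat)   # the high bytes of the double read as a huge length
    except MemoryError as err:
        print("unflatten failed:", err)
    ```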

  12. QUOTE (Neville D @ Mar 16 2009, 12:12 PM)

    Selecting inappropriate hardware (or using hardware incorrectly like not accounting for ground loops or common mode voltages) because of a lack of application-specific knowledge can be an expensive mistake.
    Contrast: If the project is "our company maintains public fountains; we need a device that we can trigger remotely to drain the fountain and then drive around picking up coins that people have thrown in", I may decide that I need an iPhone app that controls a Roomba platform. That's a very different type of hardware decision -- and one that may have been made before you were even hired. I know of a few projects where the hardware stack was in place and then they brought in the software engineer to make it all work out.

    If the job includes solder and screws, you need to know hardware details. But if you're a software engineer, the need to know hardware decreases as you move from where your software drives the hardware to where your software controls the hardware, yet both are still within the typical domain of LabVIEW. If you aren't hardware savvy, that's where to be looking for job opportunities.

  13. QUOTE (JohnRH @ Mar 16 2009, 05:29 AM)

    I suspect that very few LabVIEW programmers do nothing but write software.
    A true pity. Of the items on your list, I would only hit #5, and you might consider me "overqualified" as far as basic math is concerned, and maybe #1.

    QUOTE

    1) solid understanding of computers and networking

    2) basic understanding of serial communication protocols (RS232/485)

    3) electronics! (at LEAST enough to design basic DAQ setups)

    4) comfortable using an oscilloscope

    5) basic math (calculus and statistics)

    But I have worked with various hardware teams over the years, and I've discovered that the further you get from signal processing and the closer you get to industrial control, the less you need to understand the hardware. Once you have an API that provides control over a motor and another API that acquires a picture from a camera, then it's all math and software to figure out how to spin the motor such that the robot arm moves to a specific spot in the image and picks up the target object. You still need to understand the limitations that the hardware places on the software -- memory limits, data type restrictions, processor speed, available parallelism -- but those are restrictions within which you can design the software without understanding the analog electronics themselves.

    In my analysis, the skills you need as a LV programmer are no different than those you need as a programmer in any language:

    1. Know the terminology of the field you are serving. Making a pacemaker test harness? Know ventricle and aorta. Writing a word processor for news organizations? Know masthead and byline. This sort of subsumes the entire list about knowing electronics, etc., that was given earlier. When interviewing for a job that will involve X, be conversant with X.
    2. Know the people who will actually be using your software. A daily headache for them may be solved with a one-node tweak in your VI. A low-priority side feature to you may be a critical core use case to them. When interviewing, demonstrate that you can talk to people, and ask good questions of your interviewer. That shows you can dig for project requirements.
    3. Know the basic libraries and standard patterns of your chosen language. Don't rewrite something that already exists, and when you write new things, use the idioms that are typical for others who write that same language. When interviewing, if you're asked to demonstrate any code, make sure it is as clean as you can make it. If you're actually writing during the interview, you might not actually handle every error, but at least note out loud that it ought to be handled so you show you're aware of the situation.
    4. Know what is expensive and what is cheap in terms of the project you'll be working on. Be conversant in relative value of buying tools vs building custom. For LV, this means the classic "use your own test harness or buy someone else's". Explore what your employer's needs will be, and show that you know options.
    5. Know your own skills. If you're a hot shot LV signal processing guru who understands the fine art of minimizing error through complex calculations, don't assume that you qualify for the LV user interface job that calls for detailed XControls and 3D picture rendering. Although you want to highlight your strengths in an interview, do note your weaknesses, particularly if the employer is hiring a whole software team. You may very well still get the job, and you'll be happier because they'll hire someone whose strengths match your weaknesses, instead of someone who duplicates your skills, thus leaving a hole in the project.

  14. QUOTE (jlokanis @ Mar 12 2009, 06:41 PM)
    Yes, but if the app is continuously launching reentrant VIs dynamically, then every time it does this, it needs to allocate space for that VI and all its subVIs 'on the fly'. So, there is continuous memory allocation going on in this case.
    In that case, there is allocation regardless of the preallocate vs. shared clone setup. The cache of clones is only (to the best of my knowledge) shared among the subVI calls, not among the Open VI Reference calls.

  15. QUOTE (MJE @ Mar 5 2009, 10:10 AM)

    I'll nitpick here, in that it's always a problem with LabVIEW's implementation of events. Other frameworks don't necessarily behave this way. Events can be designed such that multiple signals of the same event over-write the previous signals...I can't remember what this is called.
    LV's implementation gives you the option of ignoring the events or not, as opposed to us ignoring them on your behalf -- sometimes that's a bad thing.

    Two options that I know of:

    1. You can write your code so that there is a millisecond count in a shift register. Every time you get an event, compare the current millisecond count against the count in the shift register. If the event is too soon, skip the event.
    2. Catch the event and rethrow it as a different event that is handled somewhere else. As of LV 8.6, there are lossy queue primitives, so you can enqueue your event into a separate handler with lossy behavior; if the queue fills up, you just start dropping updates.
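    Option 1 is plain time-based throttling. A rough Python stand-in for the shift-register pattern (the 100 ms threshold and the names here are arbitrary examples):

    ```python
    import time

    MIN_INTERVAL = 0.100          # seconds; arbitrary example threshold
    last_handled = 0.0            # plays the role of the value carried in the shift register

    def on_event(event):
        global last_handled
        now = time.monotonic()
        if now - last_handled < MIN_INTERVAL:
            return                # too soon since the last one we acted on: skip it
        last_handled = now
        print("handling", event)  # the real work goes here

    for i in range(5):
        on_event(i)               # rapid-fire events: only the first is handled
        time.sleep(0.01)
    ```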

  16. QUOTE (jlokanis @ Mar 11 2009, 05:33 PM)

    From talking with some sources at NI, it seems that the UI thread is used for the memory manager as well as all VI server calls. So, in preallocate mode, there is a lot more memory management going on (allocating space for all those clones) and that is swamping out the core that is running the UI thread

    That doesn't make sense. In preallocate mode, the memory is all preallocated at the moment you start your program running. There is no clone allocation after that point, so that shouldn't be responsible for pegging the UI thread. Let me offer a different theory...

    In the preallocate model, I agree that the one pegged thread is the UI thread. But I think what it is doing is responding to UI requests from all those other threads. The other threads have much higher performance, so they get to their UI requests more often, so the UI thread always has work to do. In the shared model, the UI thread sometimes has downtime while everyone is sharing copies around.

  17. QUOTE (MJE @ Mar 10 2009, 12:56 PM)

    If I'm not mistaken, a VI can unload from memory if the "owning" VI unloads, even if it's referenced elsewhere,
    This is wrong. A VI stays in memory as long as there are any callers of it or there are open VI references to it. You're thinking of the VI *reference* which is not the same as the VI itself. A VI reference goes stale as soon as the VI that opened that reference stops running, and when it goes stale, if it was the last reference, then the VI can leave memory. But if the VI is referenced elsewhere, the VI will stay in memory.

    QUOTE

    Do I run the same risks when dealing with class methods? That is, does the VI being owned by a class change the behavior at all?

    A class loads all of its member VIs into memory when it itself loads. After that, things get complicated, but I'm going to gloss over a whole bunch of generally not applicable situations and say "and the VIs stay around until the class leaves memory, which is after the class is no longer referenced and the last piece of class data has been deallocated." Essentially, the answer is: yes, things are different and simpler, because the member VIs load with the class and there's no way to unload the class as long as your app is still running.

  18. To implement this feature, I think your request first needs to go to the makers of the various operating systems. An app like LabVIEW would need either the ability to lock down sections of the disk and prevent all other apps, including the Explorer/Finder/command line/KDE/Gnome itself, from modifying those directories, OR a much much much more efficient mechanism for notifying an application when files change in directories that the app registers itself as caring about. The auto-populate folders that exist today are pretty much the bleeding edge of what we felt we could achieve with the existing operating systems.
