Aristos Queue

Everything posted by Aristos Queue

  1. QUOTE (Gary Rubin @ Mar 17 2009, 11:51 AM) That's right. *LabVIEW* is what makes you qualified to act as a Software Engineer. Right? :laugh:
  2. QUOTE (Jeffrey Habets @ Mar 17 2009, 09:14 AM) When LV is told to unflatten a string, we do our best to interpret it as the data type you claim it to be. If you flatten an eight-byte double as a string, then tell us to unflatten that string as a 4-byte integer, we're going to read the first four bytes. On the other hand, if you flatten a double and try to unflatten it as a string, we're going to treat the first four bytes of that data as the length of the string. Since this is likely a VERY large number, we will then try to allocate an array of that size, and we often run out of memory trying to do that. So depending upon exactly what you are flattening and unflattening, you may get the more helpful "data corrupt" errors, or you may get the "out of memory" errors. Pot luck depending on how close the data matches something that is parsable. It's not a bug -- it is LV doing exactly what you told it to do. This behavior applies regardless of the data type you're unflattening, including LV classes. And it is not unique to LabVIEW. Try renaming a random file as ".png" and then ask a paint program to open it. You'll get any number of strange behaviors. The trick is to save your data files with a unique file extension and then restrict your users to only picking files with that extension.
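The length-prefix failure mode described above is easy to reproduce outside LabVIEW. Here is a small Python sketch (Python standing in for the graphical diagram) of why unflattening a double as a string tends to produce an enormous claimed length:

```python
import struct

# Flatten an 8-byte double, big-endian (the byte order LabVIEW uses).
flat = struct.pack('>d', 3.14159)

# Unflattening this as a string reads the first 4 bytes as a length
# prefix -- which is really the high half of the double's bit pattern:
claimed_len = struct.unpack('>I', flat[:4])[0]

# claimed_len is over a billion, so the "string" allocation fails
# long before any "data corrupt" check gets a chance to run.
print(claimed_len)
```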
  3. Check your print settings. Is 8.6 searching for a non-existent remote printer?
  4. QUOTE (Neville D @ Mar 16 2009, 12:12 PM) Contrast: If the project is "our company maintains public fountains; we need a device that we can trigger remotely to drain the fountain and then drive around picking up coins that people have thrown in", I may decide that I need an iPhone app that controls a Roomba platform. That's a very different type of hardware decision -- and one that may have been made before you were even hired. I know of a few projects where the hardware stack was in place and then they brought in the software engineer to make it all work out. If the job includes solder and screws, you need to know hardware details. But if you're a software engineer, the need to know hardware decreases as you move from where your software drives the hardware to where your software controls the hardware, yet both are still within the typical domain of LabVIEW. If you aren't hardware savvy, that's where to be looking for job opportunities.
  5. QUOTE (JohnRH @ Mar 16 2009, 05:29 AM) A true pity. Of the items on your list, I would only hit #5, and you might consider me "overqualified" as far as basic math is concerned, and maybe #1. QUOTE 1) solid understanding of computers and networking 2) basic understanding of serial communication protocols (RS232/485) 3) electronics! (at LEAST enough to design basic DAQ setups) 4) comfortable using an oscilloscope 5) basic math (calculus and statistics) But I have worked with various hardware teams over the years, and I've discovered that the further you get from signal processing and the closer you get to industrial control, the less you need to understand the hardware. Once you have an API that provides control over a motor and another API that acquires a picture from a camera, then it's all math and software to figure out how to spin the motor such that the robot arm moves to a specific spot in the image and picks up the target object. You still need to understand the limitations that the hardware places on the software -- memory limits, data type restrictions, processor speed, available parallelism -- but those are restrictions within which you can design the software without understanding analog electricity itself. In my analysis, the skills you need as a LV programmer are no different than those you need as a programmer in any language: Know the terminology of the field you are serving. Making a pacemaker test harness? Know ventricle and aorta. Writing a word processor for news organizations? Know masthead and byline. This sort of subsumes the entire list about knowing electronics, etc, that was given earlier. When interviewing for a job that will involve X, be conversant with X. Know the people who will actually be using your software. A daily headache for them may be solved with a one node tweak in your VI. A low priority side feature to you may be a critical core use case to them. 
When interviewing, demonstrate that you can talk to people, and ask good questions of your interviewer. That shows you can dig for project requirements. Know the basic libraries and standard patterns of your chosen language. Don't rewrite something that already exists, and when you write new things, use the idioms that are typical for others who write that same language. When interviewing, if you're asked to demonstrate any code, make sure it is as clean as you can make it. If you're actually writing during the interview, you might not actually handle every error, but at least note out loud that it ought to be handled so you show you're aware of the situation. Know what is expensive and what is cheap in terms of the project you'll be working on. Be conversant in relative value of buying tools vs building custom. For LV, this means the classic "use your own test harness or buy someone else's". Explore what your employer's needs will be, and show that you know options. Know your own skills. If you're a hot shot LV signal processing guru who understands the fine art of minimizing error through complex calculations, don't assume that you qualify for the LV user interface job that calls for detailed XControls and 3D picture rendering. Although you want to highlight your strengths in an interview, do note your weaknesses, particularly if the employer is hiring a whole software team. You may very well still get the job, and you'll be happier because they'll hire someone whose strengths match your weaknesses, instead of someone who duplicates your skills, thus leaving a hole in the project.
  6. QUOTE (jlokanis @ Mar 12 2009, 06:41 PM) In that case, there is allocation regardless of the preallocate vs. shared clone setup. The cache of clones is (to the best of my knowledge) shared only among the subVI calls, not among the Open VI Reference calls.
  7. QUOTE (MJE @ Mar 5 2009, 10:10 AM) LV's implementation gives you the option of ignoring the events or not, as opposed to us ignoring them on your behalf -- sometimes that's a bad thing. Two options that I know of: You can write your code so that there is a millisecond count in a shift register. Every time you get an event, compare the current millisecond count against the count in the shift register. If the event is too soon, skip the event. Catch the event and rethrow as a different event that is handled somewhere else. As of LV 8.6, there are lossy queue primitives, so you can enqueue your event into a separate handler with the lossy behavior, so if the queue fills up, you just start dropping updates.
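The shift-register throttle described in the first option can be sketched like this in Python (the class and method names are invented for illustration):

```python
import time

class EventThrottle:
    """Skip events that arrive within min_interval_ms of the last handled
    event -- the millisecond-count-in-a-shift-register pattern."""
    def __init__(self, min_interval_ms):
        self.min_interval = min_interval_ms / 1000.0
        self.last_handled = float('-inf')  # plays the role of the shift register

    def should_handle(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_handled < self.min_interval:
            return False  # event arrived too soon: drop it
        self.last_handled = now
        return True

throttle = EventThrottle(100)  # handle at most one event per 100 ms
```

In the event structure's handler, you would call `should_handle()` first and fall through without doing any work when it returns False.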
  8. QUOTE (jlokanis @ Mar 11 2009, 05:33 PM) That doesn't make sense. In preallocate mode, the memory is all preallocated at the moment you start your program running. There is no clone allocation after that point, so that shouldn't be responsible for pegging the UI thread. Let me offer a different theory... In the preallocate model, I agree that the one pegged thread is the UI thread. But I think what it is doing is responding to UI requests from all those other threads. The other threads have much higher performance, so they get to their UI requests more often, so the UI thread always has work to do. In the shared model, the UI thread sometimes has downtime while everyone is sharing copies around.
  9. That would be an edit to the VI, so even if there is a way to programmatically set it, it would require scripting.
  10. QUOTE (MJE @ Mar 10 2009, 12:56 PM) This is wrong. A VI stays in memory as long as there are any callers of it or there are open VI references to it. You're thinking of the VI *reference*, which is not the same as the VI itself. A VI reference goes stale as soon as the VI that opened that reference stops running, and when it goes stale, if it was the last reference, then the VI can leave memory. But if the VI is referenced elsewhere, the VI will stay in memory. QUOTE Do I run the same risks when dealing with class methods? That is, does the VI being owned by a class change the behavior at all? A class loads all of its member VIs into memory when it itself loads. After that, things get complicated, but I'm going to gloss over a whole bunch of generally not applicable situations and say "and the VIs stay around until the class leaves memory, which is after the class is no longer referenced and the last piece of class data has been deallocated." Essentially, the answer is: yes, things are different and simpler, because the member VIs load with the class and there's no way to unload the class as long as your app is still running.
  11. Here's a far simpler explanation: Pop up on the Refnum control and select "Show Control". That's the control that is being talked about.
  12. To implement this feature, I think your request first needs to go to the makers of the various operating systems. An app like LabVIEW would need the ability to lock down sections of the disk and prevent all other apps, including the Explorer/Finder/command line/KDE/Gnome itself, from modifying those directories OR you're going to need a much much much more efficient mechanism for notifying an application when files change in directories that the app registers itself as caring about. The auto-populate folders that exist today are pretty much bleeding edge of what we felt we could achieve with the existing operating systems.
  13. QUOTE (jgcode @ Mar 5 2009, 04:44 PM) I am saying that I would never do this in the real world and I would strongly advise anyone else against ever doing it. It opens the door to your child object (and all of its descendants) being in an inconsistent state that their designers may never have handled. It can be used to hack around a poorly designed parent class. In the one case I am most familiar with, making a direct call to grandparent functionality (bypassing the parent) did let a software team ship a product on time, but that hack bit them badly in the next release when they didn't go back to refactor the code, because a new descendant class was added that assumed (rightfully) that the functionality of the parent (which was being bypassed) would be invoked. Essentially, the situation was this: a hierarchy of Grandparent, Parent, and Child. Each level of the hierarchy had an implementation of RegisterMe(). Parent's version registered the object with a framework. Child had its own overriding implementation of RegisterMe() that did some checking of itself; if certain flags were set, the function would return an error; if those flags were not set, the function would call up to the Parent implementation to do the registration. There was one place in the code where the correct behavior was to do the registration of a Child object even though the flags were set on Child that would normally make the function return an error. Refactoring the code at that point was hard -- they would have had to change some interfaces that had been stabilized. So the programmers called directly up to Parent's version, bypassing the flag check. By just bypassing Parent, the team made Child work correctly. Fine -- they shipped. Next version, a new Grandchild class was introduced. Grandchild overrode RegisterMe() such that it called up to Child's implementation. If Child returned an error, so did Grandchild. 
But if Child did not return an error, Grandchild did some more work, including registering itself with a second framework. Grandchild assumed that anytime it was registered with the first framework, it would also be registered with the second framework. It assumed that the code protected it from ever getting into an inconsistent state. The problem was that a Grandchild object got passed to that special section of code that called directly to Parent:RegisterMe(). That registered Grandchild with the first framework but not the second. Grandchild's design predicate was violated. No one noticed this bug until after release of the new version... the first to notice was a customer. Serious bug... required a custom patch for the customer. Expensive. Object-oriented design is supposed to prevent design errors like that. The whole point is that there are predicates that each class defines: "I am in state X. I define a set of functions that let me transition from state X to state X+1. I have no functions that ever let me reach state X+2 without going through state X+1. I have no functions that put me in state Y EVER. Therefore I don't have to check for state Y, and I can assume that things done in state X+1 are taken care of when I am in X+2." With these predicates, the software developer can actually make logical arguments about the correctness of his/her code. This shield is one that I took great pains to maintain in the design of LabVIEW classes. We eliminated many of the aspects of other OO languages that keep you from asserting certain truths. You cannot have two VIs of the same name that do not override each other by having different connector panes. You cannot directly call an ancestor implementation of a method. You cannot as a user directly edit the private data of a class through any UI mechanism [Yes, I know this causes a problem for debugging and we're still working on that problem, but don't expect anything soon. 
But while it is a problem when debugging, for a running app this is a feature.] You cannot have public or protected data [This one we could relax and it would be your choice when to have public data about which you could make zero assertions of correctness, but the reasons for all private data are explained in the LabVOOP white paper.] I feel this makes LabVIEW a more robust language, something that is important in all the industrial control and hardware feedback situations that LabVIEW is used for.
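The failure described above can be reduced to a few lines of Python (the class names, flags, and framework fields are hypothetical stand-ins for the real code):

```python
class Parent:
    def register_me(self):
        self.in_framework_1 = True  # register with the first framework

class Child(Parent):
    def __init__(self):
        self.flag = False
        self.in_framework_1 = False
        self.in_framework_2 = False

    def register_me(self):
        if self.flag:
            raise RuntimeError("registration refused")
        super().register_me()

class Grandchild(Child):
    def register_me(self):
        super().register_me()       # may raise, exactly as Child does
        self.in_framework_2 = True  # assumes framework 1 implies framework 2

g = Grandchild()
g.flag = True
Parent.register_me(g)  # the hack: bypass Child's flag check entirely
# Now g.in_framework_1 is True but g.in_framework_2 is False --
# Grandchild's design predicate is violated.
```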
  14. QUOTE (Mark Yedinak @ Mar 5 2009, 09:47 AM) I agree with the goal of cleaner looking code. I just don't buy that something that looks like a wire but isn't and something that doesn't look like a terminal but is contributes to the cleanliness of the diagram, especially when the functionality, as you noted, is available today through existing mechanisms. Going further, your primary case is doing timing on a block of code, having to drop down a flat sequence structure. I think that is extremely clean code. It clearly identifies what code is included in the timing AND it is instantly recognizable visually that you are doing timing ("I see Get Milliseconds in one frame, then a frame of code, then Get Milliseconds in the final frame. Oh, that's a benchmark pattern.") I don't see how having a NULL wire that could wander all over the place -- including branching off to stuff that is not included in the timing -- would be any cleaner.
  15. No, you cannot do what you are seeking to do. It is impossible, by design, and if you find a way to do it, please let me know so we can fix it. Now, having said that... What you claim you are trying to do is this: The parent defines functionality X. The child overrides functionality X. You want to have a child object call Parent:X. But you would never really want to do this. Doing so completely violates the definition of child -- child overrode the behavior of X for some reason, which may be because the parent behavior is invalid, or doesn't do sufficient input checking, or doesn't keep related fields up-to-date... etc. What you actually want to do is this: The parent defines functionality X. The parent defines functionality Y, which happens to be identical to X. (So it is probably implemented as Y calling X in the parent, but that's a private implementation detail, so we don't know... the parent could have duplicated the VI, or call into a common subVI...) The child overrides functionality Y. You have a child object call functionality X. In this case, the parent has exposed both the wrapper layer and the core layer -- and exposed them as two separate methods even though it doesn't itself need any difference between them. That's how you'll handle the Java code.
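In a textual language, the recommended shape looks roughly like this (a Python sketch; X and Y are the names from the post, the strings are invented):

```python
class Parent:
    def x(self):        # core layer: never overridden
        return "parent behavior"

    def y(self):        # wrapper layer: the overridable entry point
        return self.x() # in Parent, Y just delegates to X

class Child(Parent):
    def y(self):        # child overrides only the wrapper
        return "child wrapping " + self.x()

c = Child()
c.y()  # dispatches to Child's override
c.x()  # reaches Parent's core behavior without bypassing dispatch
```

The key design point is that callers who need the un-overridden core call X, while polymorphic callers use Y, so no one ever has to call "up" past an override.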
  16. QUOTE (jdunham @ Mar 4 2009, 02:50 AM) At the very least, you should get 8.2.1. There are seven major bugs fixed in 8.2.1, ranging from annoying UI delays to files being saved corrupted.
  17. There is a VI Analyzer test you can use to analyze VIs on disk to see if they call a given subVI: http://forums.ni.com/ni/board/message?boar...ssage.id=365628 This can search inside .llb files. I don't know in which version of LV the VI Analyzer first became available.
  18. The original .vit may have been saved thinking it is part of the .lvlib even though the .lvlib was saved with no reference to the .vit. Open the .vit itself in the LV editor and see if it thinks it is a member of the library. If it does, use "File >> Disconnect from Library" to separate the .vit from the .lvlib and then save the .vit. Then try your code again.
  19. Assuming that you have stayed entirely by-value and don't have anything that is by reference, you should be able to 1. Drop the "Flatten To String" node 2. Wire your "Chess Game" object to the data input. (Or the datalog write, as suggested earlier, or the XML write) For read, use the Unflatten From String node, with your Chess Game as the input type.
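For readers more familiar with textual languages, Flatten To String / Unflatten From String plays roughly the role that pickle plays in Python. A sketch (ChessGame here is an invented stand-in for the poster's by-value class):

```python
import pickle

class ChessGame:
    """Stand-in for a by-value LabVIEW class with no references inside."""
    def __init__(self, moves=None):
        self.moves = moves or []

game = ChessGame(["e4", "e5", "Nf3"])
flat = pickle.dumps(game)      # analogous to Flatten To String
restored = pickle.loads(flat)  # analogous to Unflatten From String
```

As in LabVIEW, this only round-trips cleanly when the data is purely by-value; references to live resources do not survive flattening.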
  20. > However in general sequence frames are not very desirable for many reasons. One of the reasons they are desirable is when there is a need for an order dependency and there isn't a dataflow dependency. As a matter of fact, this is exactly the *right* time to be using a sequence structure. > So in order to time a task we generally have to place a flat sequence frame down > with the Get Time in the first and third frames and our task in the second one. And we like this because it makes for very legible timing diagrams. But, if you want to use less diagram space, put a sequence structure around the first timing node, another around the second timing node, and thread the inputs to your code through the first structure and the outputs through the second structure. It's just a bit harder to identify exactly what nodes are being timed.
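The three-frame benchmark pattern has a direct textual analog; in Python it looks like this (a sketch, not anything LabVIEW-specific):

```python
import time

def benchmark(task):
    t0 = time.perf_counter()  # first frame: read the clock
    result = task()           # second frame: the code being timed
    t1 = time.perf_counter()  # third frame: read the clock again
    return result, (t1 - t0) * 1000.0  # elapsed milliseconds

result, ms = benchmark(lambda: sum(range(1000)))
```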
  21. I recently placed an idea in front of the LV R&D team to think about: Shouldn't error terminals be moved -- on all nodes -- to be the top terminal of all nodes? That is the one consistent location that you could then string an error wire across between functions, regardless of type. Then if you have a second type that is railroading along side the error code, have it running in the second-from-the-top terminal. This means that you have one side of your node that you can get values to the other terminals without crossing your two railroad track wires, and your nodes can stay top aligned to minimize bends in the error wire. Yeah, it'd be a big change to the diagram to do it today. But if we didn't have 20 years of diagrams, wouldn't that be a better strategy?
  22. QUOTE (joey braem @ Feb 23 2009, 04:24 PM) Not going to happen. The VI Analyzer is intended to look at block diagrams. The block diagrams do not exist in the runtime engine. And a lot of the "reflection" API for the connector pane, etc, on which the VI Analyzer relies, is also non-existent. The RT engine is for running VIs, not editing VIs. The VI Analyzer is part of the editor environment.
  23. Quick addition to crelf's clarification: QUOTE (crelf @ Feb 23 2009, 12:48 PM) The latest edition of the "GOOP Toolkit" from Endevo is built out of LVOOP classes, and the toolkit includes tools for manipulating both GOOP classes and LVOOP classes. We now return you to your irregularly-scheduled return to your regularly-scheduled programme.
  24. QUOTE (Jim Kring @ Feb 22 2009, 06:54 PM) As a workaround to that, rename the class file, load the typedef, edit it, then rename the class back to its original name and open it. This shouldn't be necessary, but since you're working around the other bug, this workaround becomes useful.
  25. Run a VI such that a non-default value ends up in an indicator. Right click on that indicator and choose "Make Current Value Default".