Everything posted by jdunham
-
Not for technical reasons, to be sure. It's a business decision on the part of National Instruments. You are absolutely right that some sort of textual representation could be implemented. However, once diagrams were converted to an equivalent textual representation, it would be fairly easy to create other IDEs to edit those files, or at least read and write the common format. It would also be easy enough to create a different execution system to read those text files and run them as a program, without having to reverse-engineer NI's patent-protected technology. I think this would vastly increase the popularity of LabVIEW, but I'm not sure NI would be able to monetize that growth. These patents are the property of NI, and it is their right to do as they please and their obligation to try to benefit their shareholders (including me, holding a piddly amount of them). I'm quite sure they have had this debate internally. So you can add this to the NI Wish List, and collect all the kudos you want, but don't hold your breath.
-
They are a bit weird to work with, but you might consider subpanels. Then you can have your graph in the subVI, which is great for debugging, and show the panel in the main VI with no extra programming. However, if your data is coming from different acquisition VIs and being aggregated in the graph, it may not be the right solution.
-
Hi Jim, I'm sorry you have some hassles ahead of you to get re-certified, but I have to confess to feeling a bit vindicated, so thanks for sharing your experience. I totally agree with all of your points. That test is just a wee bit too hard, and I hate that saying so makes me into a whiner too. Jason
-
Functions executing within a DLL run within your own process. You can't kill a DLL separately from the app.
-
Bug: Format Variant Into String__ogtk.vi doesn't handle Timestamps
jdunham replied to jed's topic in OpenG General Discussions
Thanks! We use some other custom "format anything" VIs and were handling the timestamps with custom code, so I appreciate you pointing me at the new VI. -
Bug: Format Variant Into String__ogtk.vi doesn't handle Timestamps
jdunham replied to jed's topic in OpenG General Discussions
What's the NI Equivalent for Format Variant Into String__ogtk.vi? -
grab at high frame rates and save
jdunham replied to dispossessed's topic in Machine Vision and Imaging
Sorry, I don't have IMAQ loaded on my current computer, but I can give some general observations. If at all possible, you should try for a hardware-timed acquisition with frame buffer handling done at the driver level. With a really high frame rate like yours, it seems way too risky to attempt any kind of queue-based software buffering at the LabVIEW level. I would try to use LL Ring.vi or Buffered Ring, since your goal is continuous acquisition.

I don't know how your frames are synced and timed on the hardware side, and that could affect the system, but it seems like the Ring grab should be able to give you accurate timing of frames. It shouldn't be dropping frames, or if it does, you should try to fix that somehow so that you can grab without any dropped frames. If you get to that point, then the frame timestamp is simply the frame number divided by your frame rate (i.e., multiplied by the frame period), added to the start time. If your frame sync isn't regular, like if it is based on some detection sensor, then you have a bigger problem. In that case I would get an NI multifunction or counter/timer card and use the RTSI bus to share the frame sync with a hardware timer, and then you can build up a buffer of timestamps associated with frame syncs.

If you can get that hardware-timed ring grab working, then you should be able to set up a parallel thread to monitor the current frame number and write any new frames to disk, hopefully without converting them to 2D arrays first (which is really slow); instead, use some kind of IMAQ function to save the file.

At any rate, my suggestion would be to get to the point where you can rely on every frame being acquired with a known delta time, because I don't think you'll be successful measuring it in software at those high rates. Good luck, Jason

P.S. It probably would have been good to start a new thread, since it's a new problem.
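To make the timing idea concrete, here is a minimal sketch in plain Python, not LabVIEW/IMAQ; the frame rate, start time, and the save_frame callback are hypothetical placeholders. It just shows that with a gap-free, hardware-timed ring grab, each frame's timestamp falls out of simple arithmetic, and a parallel writer only needs to track the latest frame number:

```python
# Sketch only: with a hardware-timed ring grab and no dropped frames, each
# frame's time is the start time plus (frame number / frame rate).

FRAME_RATE_HZ = 500.0          # assumed acquisition rate
start_time = 0.0               # time of the first frame sync, in seconds

def frame_timestamp(frame_number: int) -> float:
    """Timestamp of a frame acquired in a gap-free, hardware-timed ring grab."""
    return start_time + frame_number / FRAME_RATE_HZ

# A parallel "disk writer" thread then only needs the latest frame number
# reported by the driver and saves any frames it hasn't seen yet.
last_saved = -1

def save_new_frames(current_frame_number: int, save_frame) -> None:
    global last_saved
    for n in range(last_saved + 1, current_frame_number + 1):
        save_frame(n, frame_timestamp(n))   # e.g. write ring buffer n to disk
    last_saved = current_frame_number
```
-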
grab at high frame rates and save
jdunham replied to dispossessed's topic in Machine Vision and Imaging
Aren't there native IMAQ functions for writing images to disk? Why don't you stream your input straight to disk, and keep a list of frame numbers and/or file names and timestamps in your queue? That gives you an index into your collection of disk image files, and you can access them in your post-processing step.
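A rough Python analogy of that scheme (none of this is IMAQ API; the frame numbers and the caller-supplied image writer are hypothetical): the acquisition side streams each image straight to disk and only enqueues a small index record, and the post-processing step drains the index later.

```python
# Sketch: stream images to disk, queue only a lightweight index record.
import queue, time

frame_index = queue.Queue()

def on_frame_acquired(frame_number: int, write_image_to_disk) -> None:
    filename = "frame_%08d.png" % frame_number   # hypothetical naming scheme
    write_image_to_disk(filename)                # native image writer does the heavy lifting
    frame_index.put({"frame": frame_number,
                     "file": filename,
                     "timestamp": time.time()})

def drain_index():
    """Post-processing step: collect the records and open only the files you need."""
    records = []
    while not frame_index.empty():
        records.append(frame_index.get())
    return records
```
-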
We use a .NET constructor for System.Data.SqlClient.SqlConnection. All our database transactions use that. Jason
-
Well, if I can't seem to be right, I'll take interesting as a consolation prize. Yes, it's polling. However, I don't see that you've abstracted it away. 'Abstracting' implies you've hidden the implementation details, which you haven't. Now, if you put that in an lvlib or an lvclass, make it a private method, and remove the diagrams, then I would say my code is not polling, even if it calls yours. If I instrument the code and discover that it really is polling underneath the abstraction, I could rewrite your lvlib, maintaining the same API, and get rid of your pesky polling. Since the system can go from polling to not polling without any change in my highest-level code, it's not useful to say that my code is polling.

Furthermore, if you truly believe the OS and/or LabVIEW queue functions are polling, then you could pay NI 10 billion dollars, or perhaps less, to furnish a new operating system with a different architecture and get rid of the polling, and your queue-based code would not have to change (unless you were polling the queue status, which I don't recommend). Now, I don't believe this investment is necessary, since I've already laid out arguments that queues are not polled, and no compelling evidence to the contrary has been presented. Didn't you already back me up on this by making some code to wait on dozens of queues and observing that the CPU usage does not increase?

I don't agree. Like most CPUs, 80x86 chips have interrupt handlers, which relieves the CPU of any need to poll the I/O input. If the interrupt line does not change state, none of the instructions cascading from that I/O change are ever executed. The ability to execute those instructions based on a hardware signal is built into the CPU instruction set. I guess you could call that "hardware polling", but it doesn't consume any of the CPU time, which is what we are trying to conserve when we try not to use software polling.

If you put a computer in your server rack and don't plug in a keyboard, do you really think the CPU is wasting cycles polling the non-existent keyboard? Is the Windows message handler really sending empty WM_KEYDOWN events to whichever app has focus? Well, the answer, which Daklu mentioned in a previous post, is a little surprising. In the old days there was a keyboard interrupt, but the newer USB system doesn't do interrupts and is polled, so at some point in time, at a level way beneath my primary abstractions, the vendors decided to change the architecture of keyboard event processing from event-driven to polled. While it's now polled at the driver level, an even newer peripheral-handling system could easily come along which restores a true interrupt-driven design. And this polling is surely filtered at a low level, so while the device driver is in fact polling, the OS is probably not sending any spurious WM_KEYDOWN events to LabVIEW. So I suppose you could say that all my software changed from event-driven to polled at the moment I bought a USB keyboard, but I don't think that's a useful way to think about it. Maybe my next keyboard will go back to interrupt-handled. (See a related discussion at: http://stackoverflow...d-input-generic)
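Here is a plain-Python illustration of the abstraction argument (nothing here is LabVIEW or the actual OS implementation; the Mailbox class and its methods are invented for the example): a caller of a blocking wait_for_message() is not polling at its own level of abstraction, even if one particular implementation of that wait happens to poll underneath, and the implementation can be swapped without touching the caller.

```python
# Sketch: two interchangeable implementations behind one API. The caller
# below never changes, whether or not the wait polls internally.
import queue, time

class Mailbox:
    def __init__(self):
        self._q = queue.Queue()

    def post(self, msg):
        self._q.put(msg)

    # Implementation A: a genuinely blocking wait; the calling thread sleeps
    # until a message arrives (or the timeout expires).
    def wait_for_message(self, timeout=None):
        return self._q.get(timeout=timeout)

    # Implementation B: same API, but it polls internally. Worse behavior,
    # yet the caller cannot tell the difference.
    def wait_for_message_polled(self, timeout=10.0):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                return self._q.get_nowait()
            except queue.Empty:
                time.sleep(0.001)
        raise TimeoutError("no message")

# Caller: no polling is visible (or relevant) at this level of abstraction.
def handle_next(mb: Mailbox):
    return mb.wait_for_message(timeout=5.0)
```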
-
I don't think you can abort the DLL easily, since it's running inside your own app. Killing it would be literally suicidal. You could try to break up the calculations, so that you call the DLL more often (this is easier if you could modify the DLL code). You could write another LabVIEW app to run the DLL, and use TCP or network queues to communicate between the apps. That would make the DLL app killable, but it's a lot of work considering that your stated purpose is a slight improvement of the user experience.
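For what it's worth, here is a minimal sketch of that split-process idea in plain Python rather than LabVIEW (dll_worker.py is a hypothetical worker script standing in for a second executable that would load the DLL and talk to the front end over TCP or network queues): the long-running call lives in a child process, so aborting becomes a safe kill of that process instead of your own.

```python
# Sketch: run the long-running DLL call in a separate, killable process.
import subprocess, sys

# Launch the hypothetical worker that loads the DLL and makes the long call.
worker = subprocess.Popen([sys.executable, "dll_worker.py"])

def abort_long_call():
    # Killing a separate process is safe; killing a DLL running inside your
    # own process is not, which is the whole point of splitting it out.
    worker.kill()
    worker.wait()
```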
-
If your VI stops at some breakpoint, and then you abort the VI, the "pause" button will often be left on. Then when you run that VI again, it will be paused and will stop at the first node to execute. So check whether the pause button in the toolbar is pressed in.
-
Yeah, I hadn't really been paying attention, and my certification was already suspended. NI was very helpful, and I think they even tacked on another month so I could schedule the test, but after failing the exam, it didn't seem worthwhile to keep pushing and asking for favors. Well, at any rate, I would recommend sitting for the exam once, and then taking the course only if you fail. You don't have much to lose (except $200 and an hour of your time) by trying.
-
Hi Mr. Jim: I wish I had better news for you. I've been programming LabVIEW since 1994; I freelanced to create many different systems when I was a consultant, and for several years now I have been working on a large project with a few co-developers and thousands of VIs. I use projects and libraries, have made a few XControls, and have used OOP classes a bunch of times.

I took the CLA-R last August to renew my CLA certification, but I didn't take any courses. I took the practice exam from NI's website and was disappointed to get a score slightly under the passing grade. There wasn't a lot of time before my deadline for the exam, but I figured I would just be more careful. I reviewed all the questions I got wrong, got a good night's sleep, and ate a good breakfast. During the test, I felt I had plenty of time to review the questions I was not certain about, and overall I would say they were similar to the sample exam. After all that, I got almost the same grade as on the practice exam, failing by one or two questions. Since I'm not actively seeking consulting work, I figured I don't need the CLA, though when I want it again, I will have to start again from the bottom.

I have to give NI credit for being able to design a test that is very difficult to pass unless you have taken their training courses. Aside from generating revenue, it fulfills its purpose of showing interested parties that the certified person has been trained and is not just a good test taker and LabVIEW hacker like me. Well, good luck with the exam, and let us know how it goes. Jason
-
I'm just talking about your link to the manual. You can tell whether your data is flat by trying to wire it into a Type Cast function. It will only accept flat data types (though it doesn't seem to accept arrays with more than one dimension, even if they are flat). If an array of strings were stored contiguously, and you changed the size of any of the strings, LV would have to copy the entire array to a new memory location, so it wouldn't really make sense to store them that way.
-
Everything you wrote seems correct. Did you have a question or an observation? Like the manual says, arrays of flat data are stored contiguously. Arrays of variably-sized data are not stored contiguously, though the array of handles to those elements is stored contiguously. If you ask LabVIEW to flatten the data, say to write it to a file in one pass, it will have to allocate a string large enough to copy the array into. There are other, more clever, ways to write a large dataset to a file. Welcome to LAVA!
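A rough Python analogy of what that flattening involves (this is only an illustration of copying handle-referenced elements into one contiguous, length-prefixed buffer, not a byte-for-byte claim about LabVIEW's flattened format):

```python
# Sketch: an "array of strings" in memory is an array of handles/references
# to separately allocated strings; a flattened copy is one contiguous buffer
# that every element must be copied into, here with length prefixes.
import struct

strings = ["alpha", "bravo", "charlie"]      # each element allocated separately

def flatten_string_array(arr):
    out = bytearray(struct.pack(">i", len(arr)))      # array dimension size
    for s in arr:
        data = s.encode()
        out += struct.pack(">i", len(data)) + data    # per-element length prefix
    return bytes(out)                                 # one big contiguous copy

flat = flatten_string_array(strings)
```

The point is the single large allocation and the element-by-element copy, which is why flattening a big dataset in one pass can be expensive.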
-
I would say polling is testing something in software repeatedly to detect a state change, rather than executing code (possibly including the same test) as a result of a state change (or the change of something else strongly coupled to it). Time is not relevant.

I would also say that if polling is going on beneath some layer of abstraction (like an OS kernel), then it's not fair to say that everything built on top of that (like LabVIEW queues) is also polled.

Lastly, I would say that there could exist an implementation of LabVIEW such that one could create a queue and then the LV execution engine could wait on it without ever having to blindly test whether the queue is empty in order to decide whether to dequeue an element. That test would only need to be run once whenever the Dequeue Element node was called, and then once more whenever something was inserted into the queue, assuming the node was forced to wait. Given that an ignoramus like me could sketch out a possible architecture, and given that AQ has previously posted that a waiting queue consumes absolutely no CPU time, I believe that such an implementation does exist in the copy of LabVIEW I own.

Pretty much all of those statements have been denied by various posters in this thread and others, though I'm sure that's because it has taken me a long time to express those ideas succinctly. I'm sort of hoping it worked this time, but extrapolating from the past doesn't give me too much confidence. Thanks for indulging me! Jason
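Here is a sketch of what such an implementation could look like, written in plain Python with a condition variable (this is my guess at the idea, not LabVIEW's internals): the emptiness test runs once when dequeue is called and once per wake-up caused by an enqueue, and the waiting thread sleeps in between, consuming no CPU.

```python
# Sketch: a dequeue that never blindly re-tests the queue. The check runs
# only when dequeue is called and after each notify from an enqueue.
import collections, threading

class NonPollingQueue:
    def __init__(self):
        self._items = collections.deque()
        self._cond = threading.Condition()

    def enqueue(self, item):
        with self._cond:
            self._items.append(item)
            self._cond.notify()          # wake exactly one waiter

    def dequeue(self):
        with self._cond:
            while not self._items:       # re-checked only after a notify, not in a spin loop
                self._cond.wait()        # waiting thread sleeps here, using no CPU
            return self._items.popleft()
```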
-
Can I disable the Clean Up Diagram feature?
jdunham replied to xtal's topic in Development Environment (IDE)
How about whenever you move more than two nodes or wire segments without adding or deleting anything, a little animated broom comes up and says "It looks like you're trying to clean up your diagram..." Or maybe it could be a paper clip; I know one who's out of work right now. -
Sure. And it depends on what you are checking. If you are looking for new keypresses every time the timer ISR is fired, that's certainly polling. But if you only check for keypresses when the keyboard ISR fires, that's not polling. I don't think Windows polls the keyboard.
-
Apologies for not being sufficiently buzzword-compliant. And it's about time you got here!
-
Well, I do understand, but I don't agree. We'll probably never converge, but I'll give it one more go.

Repeating: "When the clock interrupt fires, ... it checks to see if a scheduled timer object has expired". This seems to you like polling, but I don't think it is. If the interrupt never fires, the timer objects will never be checked. Now, since the clock is repetitive, it seems like polling, but if you cut the clock's motherboard trace, that check will never occur again, since it's not polled. If app-level code has registered some callback functions with the OS's timer objects, then those callbacks will never be called, since there is no polling, only a cascade of callbacks from an ISR (I suppose you could call that interrupt circuitry 'hardware polling', but it doesn't load the CPU at all). Again, polling is one way to do it, but it is not required.

Not all testing is polling! You might only test the list of a clump's inputs when a new input comes in (which is the only sensible way to do it). So if a clump has 10 inputs, you would only have to test it 10 times, whether it took a millisecond or a day to accumulate the input data. I guess that's back to definitions, but if you're not running any tests and not consuming any CPU while waiting for an event (each of our 10 inputs = 1 event in this example), then you're not polling the way I would define polling. You don't have to run the test after every clump, because LV should be able to associate each clump output with all the other clump inputs to which it is connected. You only have to run the test when one of the clumps feeding your clump terminates.

It makes sense that LV wouldn't poll the queues in the example, because it wouldn't really help with anything. That's the watched pot that never boils. As long as you design the execution system to be able to work on a list of other clumps which are ready to run, and you can flag a clump as ready based on the results of the previous clumps, then you don't need to poll. It's sort of like... dataflow! If the LV execution thread has exhausted all the clumps, I suppose it could poll the empty list of clumps, but by the same logic it doesn't need to.

The minimum process load you see LV using at idle may be entirely dedicated to performing Windows system timer event callbacks, all driven from the hardware ISR (which seems like polling, but I already tried to show that it might not be polled). If Microsoft or Apple or GNU/Linux engineers chose to simulate event behavior with a polled loop within the OS, then yes, it could be, but it doesn't have to be polled. And as Steve pointed out, if there are no functions to run at all, the processor will do something, but I don't think you have to call that polling, since the behavior will go away when there is real code to run.

Understood. I'm enjoying the spirited debate, but it doesn't need to go on forever. I'm glad we're having it, because the mysteries about queues and sleeping threads and polling are common. If the topic were silly or had easy answers, presumably someone else would have come forward by now.
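To show that such a scheme is at least possible, here is a small Python sketch of the counting idea (my own guess at an architecture, not NI's actual implementation): each clump carries a count of inputs still missing, a finishing producer decrements its consumers' counts, and a consumer joins the run list the instant its count hits zero, so the "is it ready" test runs exactly once per delivered input and nothing is polled.

```python
# Sketch: readiness by counting delivered inputs, with no polling anywhere.
from collections import deque

class Clump:
    def __init__(self, name, num_inputs):
        self.name = name
        self.missing_inputs = num_inputs
        self.consumers = []                 # clumps fed by this clump's output

ready_to_run = deque()                      # work list for the execution threads

def clump_finished(clump):
    # Runs once when a producing clump terminates: each consumer is tested
    # exactly once per delivered input, never in a loop.
    for consumer in clump.consumers:
        consumer.missing_inputs -= 1
        if consumer.missing_inputs == 0:
            ready_to_run.append(consumer)   # schedulable now; no further checks needed
```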
-
Ooof. Sorry you didn't pass. If it makes you feel better, I had my CLA (I was one of the original ones, I think), and then I couldn't manage to pass the recertification exam, after stubbornly refusing to take any of the training courses. Common sense will only get you so far, and apparently it's not far enough to be a CLA!
-
Well, I was hoping that AQ would have backed me up by now, but then I realized that he backed me up on this the last time I tried to tell people that queues don't poll. So if you won't take my word for it, you can post a rebuttal on that other thread. Another related post in the same thread has some interesting info on thread management and parallelism.

Well, I didn't mean that the OS allows LabVIEW to own a hardware interrupt and service it directly. But an OS provides things like a way to register callbacks so that they are invoked on system events. The interrupt can invoke a callback (asynchronous procedure call). Do you think LabVIEW polls the keyboard too? Back in the old days, the DOS-based system timer resolution was only 55ms for the PC, and LV made a big deal of using the newly available multimedia system timer for true 1ms resolution. I think that's still the way it is, and 1ms event-driven timing is available to LabVIEW.

Based on re-reading this old AQ post, I'm trying to reconcile your concept of constant-source dataflow with the clumping rules and the apparent fact that a clump boundary is created whenever you use a node that can put the thread to sleep. It sounds like it is much harder for LV to optimize code which crosses a clump boundary. If someone cares about optimization (which in general, they shouldn't), then worrying about where the code is put to sleep might matter more than the data sources.

Overall, I'm having trouble seeing the utility of your second category. Yes, your queue examples can be simplified, but it's basically a trivial case, and 99.99% of queue usage is not going to be optimizable like that, so NI will probably never add that optimization. I'm not able to visualize other examples where you could have constant-source dataflow in a way that would matter. For example, you could have a global variable that is only written in one place, so it might seem like the compiler could infer a constant-source dataflow connection to all its readers. But it never could, because you could dynamically load some other VI which writes the global and breaks that constant-source relationship. So I just don't see how to apply this to any real-world programming situations.
-
Can I disable the Clean Up Diagram feature?
jdunham replied to xtal's topic in Development Environment (IDE)
Bringing this back off-topic again, I think Mercurial is pretty lightweight, and you could run it from a USB drive or by temporarily dropping a few files on the hard drive of the competition computer. It seems like that ability to back out changes or restore a configuration from a previous robot match would really be a lifesaver in this environment. -
Can I disable the Clean Up Diagram feature?
jdunham replied to xtal's topic in Development Environment (IDE)
Agreed on the SCC option. That other kid could have done any number of things (deletions, etc.) and saved the code, and whether or not it was intentional doesn't matter too much; it's still a risk. I use SVN too, though Mercurial seems like a better fit for this situation since all the change history for the current branch should be available even if you are disconnected from the internet. You could do that with SVN too if you run a local server. I'm sorry for the awful situation, but I don't know that disabling a toolbar button is the real answer.