Everything posted by jdunham

  1. Hi Jim, I'm sorry you have some hassles ahead of you to get re-certified, but I have to confess to feeling a bit vindicated, so thanks for sharing your experience. I totally agree with all of your points. That test is just a wee bit too hard, and I hate that saying that makes me into a whiner too. Jason
  2. Executing functions within a DLL happens within your own process. You can't kill a DLL separately from the app.
  3. Sorry, I don't have IMAQ loaded on my current computer, but I can give some general observations. If at all possible you should try for a hardware-timed acquisition with frame buffer handling done at the driver level. With a really high frame rate like you have, it seems way too risky to attempt any kind of queue-based software buffering at the LabVIEW level. I would try to use LL Ring.vi or Buffered Ring since your goal is continuous acquisition.

I don't know how your frames are synced and timed on the hardware side, and so that could affect the system, but it seems like the Ring grab should be able to give you accurate timing of frames. It shouldn't just be dropping frames, or if it does, you should try to fix that somehow so that you can grab without any dropped frames. If you get to that point, then the frame timestamp should be a simple multiplication of the frame number and your frame period, added to the start time. If your framesync isn't even regular, like if it is based on some detection sensor, then you have a bigger problem. In that case I would get an NI multifunction or counter/timer card and use the RTSI bus to share the framesync with a hardware timer, and then you can build up a buffer of timestamps associated with framesyncs.

If you can get that hardware-timed ring grab working, then you should be able to set up a parallel thread to monitor the current frame number and write any new frames to disk, hopefully without converting them to 2D arrays first (which is really slow); instead, use some kind of IMAQ function to save the file. At any rate, my suggestion would be to get to the point where you can rely on every frame being acquired with a known delta time, because I don't think you'll be successful measuring it in software at those high rates. Good luck, Jason

P.S. It probably would have been good to start a new thread since it's a new problem.
  4. Aren't there native IMAQ functions for writing images to disk? Why don't you stream your input straight to disk, and keep a list of frame numbers and/or file names and timestamps in your queue? That gives you an index into your collection of disk image files, and you can access them in your post-processing step.
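The timestamp arithmetic suggested above is simple enough to sketch. This is a minimal illustration in Python, assuming a hardware-timed acquisition with a fixed frame rate; the function and file names are hypothetical, not IMAQ API calls.

```python
# Sketch: deriving per-frame timestamps from a hardware-timed acquisition.
# Assumes a fixed, hardware-driven frame rate. Names are illustrative only.

def frame_timestamp(start_time, frame_number, frame_rate_hz):
    """Timestamp of a frame = acquisition start + frame_number / frame rate."""
    return start_time + frame_number / frame_rate_hz

def build_frame_index(start_time, frame_rate_hz, frame_numbers):
    """Index into the on-disk image collection: frame number -> (file, time)."""
    return {
        n: ("frame_%06d.png" % n, frame_timestamp(start_time, n, frame_rate_hz))
        for n in frame_numbers
    }
```

For example, at 500 frames/s, frame 1000 lands 2 seconds after the start time; the index gives the post-processing step a direct lookup from frame number to file name and timestamp.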
  5. We use a .NET constructor for System.Data.SqlClient.SqlConnection. All our database transactions use that. Jason
  6. Well, if I can't seem to be right, I'll take interesting as a consolation prize. Yes, it's polling. However, I don't see that you've abstracted it away. 'Abstracting' implies you've hidden the implementation details, which you haven't. Now if you put that in an lvlib or an lvclass, make it a private method, and remove the diagrams, then I would say my code is not polling, even if it calls yours. If I instrument the code and discover that it really is polling underneath the abstraction, I could rewrite your lvlib, maintaining the same API, and get rid of your pesky polling. Since the system can go from polling to not polling without any change in my highest-level code, it's not useful to say that my code is polling.

Furthermore, if you truly believe the OS and/or LabVIEW queue functions are polling, then you could pay NI 10 billion dollars, or perhaps less, to furnish a new operating system with a different architecture and get rid of the polling, and your queue-based code would not have to change (unless you were polling the queue status, which I don't recommend). Now I don't believe this investment is necessary, since I've already laid out arguments that queues are not polled, and no compelling evidence to the contrary has been presented. Didn't you already back me up on this by making some code to wait on dozens of queues and observing that the CPU usage does not increase?

I don't agree. Like most CPUs, 80x86 chips have interrupt handlers, which relieve the CPU of any need to poll the I/O input. If the interrupt line does not change state, none of the instructions cascading from that I/O change are ever executed. The ability to execute those instructions based on a hardware signal is built into the CPU instruction set. I guess you could call that "hardware polling", but it doesn't consume any of the CPU time, which is what we are trying to conserve when we try not to use software polling.

If you put a computer in your server rack and don't plug in a keyboard, do you really think the CPU is wasting cycles polling the non-existent keyboard? Is the Windows message handler really sending empty WM_KEYDOWN events to whichever app has focus? Well, the answer, which Daklu mentioned in a previous post, is a little surprising. In the old days, there was a keyboard interrupt, but the newer USB system doesn't do interrupts and is polled, so at some point in time, at a level way beneath my primary abstractions, the vendors decided to change the architecture of keyboard event processing from event-driven to polled. While it's now polled at the driver level, an even newer peripheral handling system could easily come along which restores a true interrupt-driven design. And this polling is surely filtered at a low level, so while the device driver is in fact polling, the OS is probably not sending any spurious WM_KEYDOWN events to LabVIEW. So I suppose you could say that all my software changed from event-driven to polled the moment I bought a USB keyboard, but I don't think that's a useful way to think about it. Maybe my next keyboard will go back to interrupt-handled. (See a related discussion at: http://stackoverflow...d-input-generic)
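The distinction being argued above — a blocked wait consumes no CPU, while a polled loop repeatedly tests state — can be shown concretely with ordinary threading primitives. This is an analogy in Python, not a claim about LabVIEW's or the OS's actual internals.

```python
# Sketch: event-driven waiting vs. polling, in Python threading terms.
# An analogy for the discussion above, not LabVIEW's actual implementation.
import threading
import time

def polled_wait(flag, interval=0.01):
    # Polling: repeatedly test shared state. Each pass burns a check,
    # whether or not anything changed.
    checks = 0
    while not flag["set"]:
        checks += 1
        time.sleep(interval)
    return checks

def event_wait(event):
    # Event-driven: the thread sleeps in the kernel until signaled.
    # Our code performs no repeated test while waiting.
    event.wait()
    return 0  # zero checks made while blocked

event = threading.Event()
result = {}
t = threading.Thread(target=lambda: result.setdefault("checks", event_wait(event)))
t.start()
event.set()   # the "interrupt": wakes the sleeping waiter directly
t.join()
```

The event-driven waiter does no work at all between `start()` and `set()`; the polled version would have accumulated one check per interval for as long as nobody set the flag.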
  7. I don't think you can abort the DLL easily, since it's running inside your own app. Killing it would be literally suicidal. You could try to break up the calculations, so that you call the DLL more often (this is easier if you could modify the DLL code). You could write another LabVIEW app to run the DLL, and use TCP or network queues to communicate between the apps. That would make the DLL app killable, but it's a lot of work considering that your stated purpose is a slight improvement of the user experience.
  8. If your VI stops at some breakpoint, and then you abort the VI, the "pause" button will often be left on. Then when you run that VI again, it will be paused and will stop at the first node to execute. So check whether the pause button in the toolbar is pressed in.
  9. Yeah, I hadn't really been paying attention and was already suspended. NI was very helpful, and I think they even tacked on another month so I could schedule the test, but after failing the exam, it didn't seem worthwhile to keep pushing and asking for favors. Well at any rate I would recommend sitting for the exam once, and then taking the course only if you fail. You don't have much to lose (except $200 and an hour of your time) by trying.
  10. Hi Mr. Jim: I wish I had better news for you. I've been programming LabVIEW since 1994, and have freelanced to create many different systems when I was a consultant, and for several years now I have been working on a large project with a few co-developers and thousands of VIs. I use projects and libraries, have made a few XControls, and have used OOP classes a bunch of times.

I took the CLA-R last August to renew my CLA certification, but I didn't take any courses. I took the practice exam from NI's website and was disappointed to get a score slightly under the passing grade. There wasn't a lot of time before my deadline for the exam, but I figured I would just be more careful. I reviewed all the questions I got wrong, got a good night's sleep, and ate a good breakfast. During the test, I felt I had plenty of time to review the questions I was not certain about, and overall I would say they were similar to the sample exam. After all that, I got almost the same grade as I got on the practice exam, failing by one or two questions.

Since I'm not actively seeking consulting work, I figured I don't need the CLA, though when I want it again, I will have to start again from the bottom. I have to give NI credit for being able to design a test that is very difficult to pass unless you have taken their training courses. Aside from generating revenue, it fulfills its purpose of showing the people interested that the certified person has been trained and is not just a good test taker and LabVIEW hacker like me. Well, good luck with the exam, and let us know how it goes. Jason
  11. I'm just talking about your link to the manual. You can tell whether your data is flat by trying to wire it into a Type Cast function. It will only accept flat data types (though it doesn't seem to accept arrays with more than one dimension, even if they are flat). If an array of strings were stored contiguously, and you changed the size of any of the strings, LV would have to copy the entire array to a new memory location, so it wouldn't really make sense to store them that way.
  12. Everything you wrote seems correct. Did you have a question or an observation? Like the manual says, arrays of flat data are stored contiguously. Arrays of variably-sized data are not stored contiguously, though the array of handles to those elements is stored contiguously. If you ask LabVIEW to flatten the data, say to write it to a file in one pass, it will have to allocate a string large enough to copy the array into. There are other, more clever, ways to write a large dataset to a file. Welcome to LAVA!
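The flat-versus-variable-size distinction in the two posts above can be illustrated with a small sketch. This shows the general principle only, in Python, and makes no claim about LabVIEW's actual in-memory layout: fixed-size elements flatten to a contiguous block with computable offsets, while variable-size elements need per-element lengths (the role the handles play).

```python
# Sketch: why fixed-size elements flatten trivially but variable-size
# elements don't. Illustrative only; not LabVIEW's actual memory layout.
import struct

def flatten_i32_array(values):
    # Fixed-size elements: element i lives at byte offset 4*i, so the
    # whole array is one contiguous, randomly addressable block.
    return struct.pack(">%di" % len(values), *values)

def flatten_string_array(strings):
    # Variable-size elements: each needs a length prefix, and you cannot
    # find element i without walking the data from the start.
    out = b""
    for s in strings:
        data = s.encode("utf-8")
        out += struct.pack(">i", len(data)) + data
    return out
```

Resizing one string in the flattened form shifts every element after it, which is exactly why storing such an array contiguously (rather than as an array of handles) would force a full copy on every resize.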
  13. I would say polling is testing something in software repeatedly to detect a state change, rather than executing code (possibly including the same test) as a result of a state change (or the change of something else strongly coupled to it). Time is not relevant. I would also say that if polling is going on beneath some layer of abstraction (like an OS kernel), then it's not fair to say that everything built on top of that (like LabVIEW queues) is also polled. Lastly, I would say that there could exist an implementation of LabVIEW such that one could create a queue and then the LV execution engine could wait on it without ever having to blindly test whether the queue is empty in order to decide whether to dequeue an element. That test would only need to be run once whenever the Dequeue Element node was called, and then once more whenever something was inserted into the queue, assuming the node was forced to wait.

Given that an ignoramus like me could sketch out a possible architecture, and given that AQ has previously posted that a waiting queue consumes absolutely no CPU time, I believe that such an implementation does exist in the copy of LabVIEW I own. Pretty much all of those statements have been denied by various posters in this thread and others, though I'm sure that's only because it has taken me a long time to express those ideas succinctly. I'm sort of hoping it worked this time, but extrapolating from the past doesn't give me too much confidence. Thanks for indulging me! Jason
  14. How about whenever you move more than two nodes or wire segments without adding or deleting anything, a little animated broom comes up and says "It looks like you're trying to clean up your diagram..." Or maybe it could be a paper clip; I know one who's out of work right now.
  15. Sure. And it depends on what you are checking. If you are looking for new keypresses every time the timer ISR is fired, that's certainly polling. But if you only check for keypresses when the keyboard ISR fires, that's not polling. I don't think Windows polls the keyboard.
  16. Apologies for not being sufficiently buzzword-compliant. And it's about time you got here!
  17. Well, I do understand, but I don't agree. We'll probably never converge, but I'll give it one more go. Repeating: "When the clock interrupt fires, ... it checks to see if a scheduled timer object has expired". This seems to you like polling, but I don't think it is. If the interrupt never fires, the timer objects will never be checked. Now since the clock is repetitive, it seems like polling, but if you cut the clock's motherboard trace, that check will never occur again, since it's not polled. If app-level code has registered some callback functions with the OS's timer objects, then those callbacks will never be called, since there is no polling, only a cascade of callbacks from an ISR (I suppose you could call that interrupt circuitry 'hardware polling', but it doesn't load the CPU at all). Again, polling is one way to do it, but is not required. Not all testing is polling!

You might only test the list of a clump's inputs when a new input comes in (which is the only sensible way to do it). So if a clump has 10 inputs, you would only have to test it 10 times, whether it took a millisecond or a day to accumulate the input data. I guess that's back to definitions, but if you're not running any tests and not consuming any CPU while waiting for an event (each of our 10 inputs = 1 event in this example), then you're not polling the way I would define polling. You don't have to run the test after every clump, because LV should be able to associate each clump output with all the other clump inputs to which it is connected. You only have to run the test when one of the clumps feeding your clump terminates. It makes sense that LV wouldn't poll the queues in the example because it wouldn't really help with anything. That's the watched pot that never boils. As long as you design the execution system to be able to work on a list of other clumps which are ready to run, and you can flag a clump as ready based on the results of the previous clumps, then you don't need to poll. It's sort of like... dataflow!

If the LV execution thread has exhausted all the clumps, I suppose it could poll the empty list of clumps, but by the same logic it doesn't need to. The minimum process load you see LV using at idle may be entirely dedicated to performing Windows system timer event callbacks, all driven from the hardware ISR (which seems like polling, but I already tried to show that it might not be polled). If Microsoft or Apple or GNU/Linux engineers chose to simulate event behavior with a polled loop within the OS, then yes, it could be, but it doesn't have to be polled. And as Steve pointed out, if there are no functions to run at all, the processor will do something, but I don't think you have to call that polling, since the behavior will go away when there is real code to run.

Understood. I'm enjoying the spirited debate, but it doesn't need to go on forever. I'm glad we're having it, because the mysteries about queues and sleeping threads and polling are common. If the topic were silly or had easy answers, presumably someone else would have come forward by now.
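The "flag a clump as ready when its producers finish" idea described above can be sketched concretely. This is a speculative, simplified model in Python of how a dataflow scheduler could avoid polling: each clump carries a count of unsatisfied inputs, the count is decremented only when a producer finishes, and the scheduler works exclusively from a list of runnable clumps. All names here are illustrative; nothing is taken from LabVIEW's real execution engine.

```python
# Sketch: non-polling dataflow scheduling, per the speculation above.
# A clump's readiness is tested only when one of its producers finishes,
# never in a loop. Purely illustrative, not LabVIEW internals.
from collections import deque

def run_dataflow(clumps, edges):
    """clumps: {name: number_of_inputs}; edges: {producer: [consumers]}."""
    pending = dict(clumps)
    # Clumps with zero inputs are runnable immediately.
    ready = deque(name for name, n in pending.items() if n == 0)
    order = []
    while ready:                          # work list of runnable clumps only
        clump = ready.popleft()
        order.append(clump)               # "execute" the clump
        for consumer in edges.get(clump, []):
            pending[consumer] -= 1        # one input satisfied: the only test
            if pending[consumer] == 0:
                ready.append(consumer)    # consumer becomes runnable
    return order

# Diamond graph: a feeds b and c; b and c both feed d.
order = run_dataflow({"a": 0, "b": 1, "c": 1, "d": 2},
                     {"a": ["b", "c"], "b": ["d"], "c": ["d"]})
```

A clump with 10 inputs gets tested exactly 10 times, once per arriving input, whether the inputs arrive in a millisecond or a day; nothing spins while waiting.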
  18. Ooof. Sorry you didn't pass. If it makes you feel better, I had my CLA (I was one of the original ones, I think), and then I couldn't manage to pass the recertification exam, after stubbornly refusing to take any of the training courses. Common sense will only get you so far, and apparently it's not far enough to be a CLA!
  19. Well, I was hoping that AQ would have backed me up by now, but then I realized that he backed me up on this the last time I tried to tell people that queues don't poll. So if you won't take my word for it, you can post a rebuttal on that other thread. Another related post in the same thread has some interesting info on thread management and parallelism.

Well, I didn't mean that the OS allows LabVIEW to own a hardware interrupt and service it directly. But an OS provides things like a way to register callbacks so that they are invoked on system events. The interrupt can invoke a callback (asynchronous procedure call). Do you think LabVIEW polls the keyboard too? Back in the old days, the DOS-based system timer resolution was only 55ms for the PC, and LV made a big deal of using the newly-available multimedia system timer for true 1ms resolution. I think that's still the way it is, and 1ms event-driven timing is available to LabVIEW.

Based on re-reading this old AQ post, I'm trying to reconcile your concept of constant-source dataflow with the clumping rules and the apparent fact that a clump boundary is created whenever you use a node that can put the thread to sleep. It sounds like it is much harder for LV to optimize code which crosses a clump boundary. If someone cares about optimization (which in general, they shouldn't), then worrying about where the code is put to sleep might matter more than the data sources. Overall, I'm having trouble seeing the utility of your second category. Yes, your queue examples can be simplified, but it's basically a trivial case, and 99.99% of queue usage is not going to be optimizable like that, so NI will probably never add that optimization. I'm not able to visualize other examples where you could have constant-source dataflow in a way that would matter.
For example, you could have a global variable that is only written in one place, so it might seem like the compiler could infer a constant-source dataflow connection to all its readers. But it never could, because you could dynamically load some other VI which writes the global and breaks that constant-source relationship. So I just don't see how to apply this to any real-world programming situations.
  20. Bringing this back off-topic again: I think Mercurial is pretty lightweight, and you could run it from a USB drive or by temporarily dropping a few files on the hard drive of the competition computer. It seems like the ability to back out changes or restore a configuration from a previous robomatch would really be a lifesaver in this environment.
  21. Agreed on the SCC option. That other kid could have done any number of things (deletions, etc.) and saved the code, and whether or not it was intentional doesn't matter too much; it's still a risk. I use SVN too, though Mercurial seems like a better fit for this situation since all the change history for the current branch should be available even if you are disconnected from the internet. You could do that with SVN too if you run a local server. I'm sorry for the awful situation, but I don't know that disabling a toolbar button is the real answer.
  22. I don't know anything factual about the internals of LabVIEW, which is lucky for you, because if I did, I wouldn't be allowed to post this. So anyway, it's highly likely that the LV compiler generates executable clumps of code, and that each execution thread in LabVIEW is some array or FIFO queue of these clumps. The execution engine probably does a round-robin sequencing of each thread queue and executes the next clump in line. So when a clump contains a Dequeue Element function, and the data queue is empty, this clump's thread is flagged for a wait or, more likely, removed from some master list of active threads. Then some identifier for that thread or that clump is put in the data queue's control data structure (whatever private data LV uses to manage a queue). That part is surely true, since Get Queue Status will tell you the number of "pending remove" instances which are waiting. In the meantime, that thread is removed from the list of threads allowed to execute, and the engine goes off and keeps working on the other threads which are unblocked. There's no need to poll that blocked thread, because it's easier to just keep a list of unblocked threads and work on those. When data is finally enqueued, the queue manager takes the list of blocked threads and clears their flag or adds them back to the list of threads allowed to execute. No interrupts, no polling. Of course, if all threads are blocked, the processor has to waste electricity somehow, so it might do some NOOP instructions (forgetting about all the other OS shenanigans going on), but you can't really call that polling the queue's thread. It's really cool that the implementation of a dataflow language can be done without a lot of polling. For the system clock/timer, that's surely a hardware interrupt, so that's code executed whenever some motherboard trace sees a rising edge, and the OS passes that to LV, and then something very similar to the above happens. So that's not really polled either.

OK, I had to answer this out of order, since it follows from the previous fiction I wrote above. Between each clump of code in a thread, there should be a data clump/cluster/list that contains the output wire-data from one clump to be used as the input wire-data of the next one. That's the low-level embodiment of the wire, and whether any C++ pointers were harmed in the making of it is not relevant. Now if the code clump starts off with a Dequeue function, it gets that data not from the dataflow data clump, but rather from the queue's control data structure off in the heap somewhere. It comes from a global memory store, and anyone with that queue refnum can see a copy of it, rather than from the dataflow, which is private to the adjacent code clumps in the thread. Well, anyway, they undoubtedly use some pointers here so that memory doesn't have to be copied from data clump to data clump. But those pointers are still private to that thread and point to memory that is not visible to any clump that doesn't have a simple dataflow connection. I think your mental model of how the internals might work is actually getting in the way here. Yes, Virginia, there *is* a wire construct in the execution environment. I grant that my mental model could be wrong too (AQ is probably ROTFLHAO at this point), but hopefully you can see why I think real dataflow is as simple as it looks on a diagram.

Well, we're not getting any closer on this. I still think that other stuff is not pure dataflow, and 'dataflow' is a very useful word. If you say that every iota of LabVIEW is dataflow, then the only place that word is useful is in the marketing literature, and I'm not willing to cede it to them. Maybe the key is adding a modifier, like 'pure', 'simple', or 'explicit' dataflow. Hey, I'm learning too, and by the way I really want to try out the Lapdog stuff. I may come back to you for help on that.
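The wakeup mechanism guessed at above — a waiting consumer is parked in the queue's own bookkeeping and woken directly by the producer — is exactly the standard condition-variable pattern. Here is a minimal Python sketch of it; this is an analogy for the speculation, not LabVIEW's actual queue implementation.

```python
# Sketch: a blocking dequeue with no polling, per the mechanism guessed above.
# The waiting consumer sleeps inside the queue's "control data structure"
# and is woken directly by the enqueuer. Analogy only, not LabVIEW source.
import threading
from collections import deque

class BlockingQueue:
    def __init__(self):
        self._items = deque()
        self._cond = threading.Condition()   # the queue's control structure

    def enqueue(self, item):
        with self._cond:
            self._items.append(item)
            self._cond.notify()              # wake one parked dequeuer

    def dequeue(self):
        with self._cond:
            while not self._items:           # re-checked only on wakeup;
                self._cond.wait()            # the thread sleeps here, 0% CPU
            return self._items.popleft()

q = BlockingQueue()
results = []
consumer = threading.Thread(target=lambda: results.append(q.dequeue()))
consumer.start()
q.enqueue(42)        # the producer wakes the sleeping consumer directly
consumer.join()
```

The emptiness test runs once when `dequeue` is called and once per wakeup; between those moments the consumer is off the scheduler's runnable list entirely, which matches the "pending remove" behavior described above.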
  23. Well, it still seems sort of academic. The examples you show don't seem suitable for compiler optimization into "simple dataflow" constructions. In most non-trivial implementations you might end up with one of the queue functions in a subVI. Whether or not your queue is named, once you let that refnum touch a control or indicator, you are not deterministic. It's not that hard to write a dynamic VI which could use VI Server calls to scrape that refnum and start calling more queue functions against it. I know you're just trying to find an example, but that one doesn't go too far.

I didn't quite get this. The subVIs will release their references to the queue, but they will still enqueue their elements first, and it will be as deterministic as any of your other examples. The queue refnums may be different, but they all point to the same queue, which will continue to exist until all references to it are destroyed or are automatically released when their callers go out of scope.

Well, yes, it's kind of murky, but the time functions are basically I/O ('I' in this case). They return the current state of an external quantity, injecting it into the dataflow. All I/O calls represent a local boundary of the dataflow world. In contrast, the queue functions (and globals, locals, etc.) transfer data from one LabVIEW dataflow wire to a totally unconnected LabVIEW dataflow wire. So I think that supports my saying that these break dataflow, while the timing functions don't. BTW, I don't believe the queues poll. I'm pretty sure AQ has said those threads "go to sleep", and it only took me a minute or two to figure out how to implement the wakeup without any polling. The NI guys are way smarter than me, so they probably figured it out too.
Except when I call the strict by-value/deterministic stuff "pure dataflow" to my co-workers, they immediately understand what I am talking about, whereas I would constantly get quizzical looks if I switched over to saying "by-value construction" (even though they understand those words). Anyway, I'm fine with using your definitions within the scope of this thread, assuming I bore you all yet again with any more replies. Oh gosh, I found that disturbing. I think it was a mistake on NI's part to allow this in the language syntax.
  24. In your position, I think you should buy an off-the-shelf solution. Why do you expect us to do this work for you when you can just purchase such a system? http://www.eaglevision1.com/license-plate-recognition.htm Of course if this is educational, you're going to have to do the majority of the work yourself and show the work you've done so far before asking specific questions about how to reach the next step. Good luck!
  25. Well I just posted, and I feel like I didn't adequately answer your real question, so I'll try again. I would say that the Dequeue Element node is a special node that fires not when its input data is valid, but rather when an event occurs, like data is enqueued somewhere, or the queue is destroyed. So sure, technically it fires right away, and its function is to wait for the event, so it "starts executing" on the valid inputs but then it sits around doing nothing until some non-dataflow-linked part of your code drops an element into the queue. So that node (Dequeue Element) is executed under dataflow rules, like all LabVIEW nodes are, but what goes on inside that node is a non-dataflow operation, at least the way I see it. It's "event-driven" rather than "dataflow-driven" inside the node. Similarly a refnum is a piece of data that can and should be handled with dataflow, but the fact that a refnum points to some other object is a non-dataflow component of the LabVIEW language (we're not still calling it 'G', are we?).