Everything posted by jdunham

  1. One thing I programmed recently was my own version of a model-view-controller architecture. It uses an action engine/producer consumer system which is certainly nothing new. Commands are sent to the model (the engine) with queues, and if the command wants a response it can supply a notifier. So what is new (to me, at least) is that the model is in an lvclass, and the queue is private, so no other modules can access the action engine without using a method from the class. Similarly the event structure in the same VI as the model also uses those methods so the front panel button presses exercise the exact same code available to other modules. The VIs inside the action engine are mostly private scope, so no other routines can invoke them outside of the engine. Once it's deployed, the front panel can remain hidden forever or else it can be shown if you want to give the end-user manual control of that system. The lvclass just contains the message queue (which is basically the controller), and the status notifier (the view), which is different from the per-action response notifier which is optional for each control message. The status notifier is updated whenever the model changes, or it can be put in a timeout case to update constantly. Other modules can call a public method which gets the latest data out of the private status notifier.
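The private-queue action engine described above translates to other languages too. Below is a minimal Python sketch of the same idea (all names are invented for the example): the command queue is private to the class, so other modules — and the model's own UI handling — must go through the same public methods, and any command can optionally supply a reply queue as the per-action response notifier.

```python
import queue
import threading

class ActionEngine:
    """Toy stand-in for the lvclass-based model: the command queue is
    private, so callers must use the public methods to reach the engine."""

    def __init__(self):
        self.__commands = queue.Queue()     # private (name-mangled) queue
        self.__state = {"count": 0}
        threading.Thread(target=self.__run, daemon=True).start()

    def __run(self):
        # The model: one consumer loop that owns all state changes.
        while True:
            action, payload, reply = self.__commands.get()
            if action == "increment":
                self.__state["count"] += payload
            # "get" changes nothing; it exists only to fetch status
            if reply is not None:           # optional per-action notifier
                reply.put(dict(self.__state))

    # Public methods: the only way in, for other modules *and* for the
    # engine's own front-panel event handling.
    def increment(self, amount):
        self.__commands.put(("increment", amount, None))

    def status(self):
        reply = queue.Queue(maxsize=1)
        self.__commands.put(("get", None, reply))
        return reply.get(timeout=1.0)
```

Because every command funnels through the one private queue, commands are serialized in order, which is what makes the front panel and other modules safely share the same code path.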
  2. QUOTE (menghuihantang @ Mar 25 2009, 05:58 AM) Laser beams are cool, but if you notice, a person with a laser pointer has a hard time keeping it from shaking. It is definitely worthwhile to think about cool and useful stuff, but sometimes things will add on rather than replace. In fact given that touchscreens have been around for 20 years or so, I suspect they've already been used for all the things they are good at and have been rejected for things like computer programming where they don't add much value. But as another example, I suspect the keyboard is never going away. It's very fast and flexible. Even if computers can understand speech, it's still easier to type "LVOOP" than try to say it. Maybe if we all have direct neural implants we could bypass keyboards, but then you might have a hard time keeping your more private thoughts private while you are trying to dump out your regular thoughts over the neural interface. Jason
  3. QUOTE (MJE @ Mar 24 2009, 01:23 PM) I think you are misreading the article: QUOTE (How LabVIEW Stores Data in Memory) String LabVIEW stores strings as pointers to a structure that contains ... If the handle, or the pointer to the structure, is NULL... QUOTE (How LabVIEW Stores Data in Memory) Variant LabVIEW stores variants as handles to a LabVIEW internal data structure. Variant data is made up of 4 bytes. As I read it, variants are a lot like strings, but with more features (a type descriptor and attributes). Also variants don't behave like by-reference objects (queues & notifiers) and they aren't represented by refnums. When you probe them, you can see the actual data. QUOTE (jlokanis @ Mar 24 2009, 01:59 PM) That may be true, but if you preview the queue, you make a copy. I suppose if you dequeue it, the element you get is just a pointer to the same block of memory that the queue was holding on to. But even that may not be true. After all, if you dequeue all elements in a queue, the queue does not free the memory it used to store those elements. A queue hangs on to all the memory it has ever allocated. So, if each element takes up 1k and you had 100 elements in the queue at one time, then the queue is still consuming 100k, even if you flush it. Hmm. It would seem to me that a queue of 100 strings would contain 100 handles pointing to some string data. Then if you flushed the queue, the 100 handles would probably remain allocated, but the actual string data they were pointing to might get released. Exactly when that release would happen should be handled by LabVIEW's memory manager and would depend on whether any other wires on any active diagrams were also pointing to those strings. Of course I'm too lazy to try to test this out, and MJE is right that it's awfully hard to tell which wires are using/sharing memory at any given time. 
In general, the lack of memory control and debugging tools is a feature, but I can understand that when things go wrong and memory usage rockets upward, it is hard to isolate the problem.
  4. QUOTE (LeeH @ Mar 24 2009, 09:16 AM) If you turn on private methods (http://wiki.lavag.org/Category%3a%50rivate_method), there is a method "VI.Export Interface" which dumps out the goodies you want.
  5. QUOTE (menghuihantang @ Mar 24 2009, 09:13 AM) Well those are certainly worth discussing, and I didn't mean to go on the attack. I am lucky that my wrists have never gotten injured after many years of LabVIEW and computers. It is my opinion that mice are overwhelmingly popular partly because they are the most effective input devices. The mouse seems to work the muscles which have the most outstanding fine motor control capability: side to side with the wrist and forward and backward with the fingers. I think it's great to discuss alternatives, but the mouse is a hard act to follow. Similarly with the keyboard: voice activation sounds cool, but if you can type 60 words per minute or more, with good accuracy, then voice is probably not going to be an improvement.
  6. QUOTE (menghuihantang @ Mar 24 2009, 07:49 AM) Well I believe everything else you said, but not that.
  7. QUOTE (menghuihantang @ Mar 24 2009, 05:41 AM) Well typing is still pretty important in LabVIEW. If your code doesn't have labels and comments, then it's not any good. But I guess you could use a speech-to-text driver to fix that. LabVIEW programming also requires a fair amount of fine motor control (in this case "motor" refers to your hand-eye coordination and ability to make small, controlled movements with fingers and wrist). I don't think a touchscreen really adds anything to the ability to transfer those fine motions into a computer. Maybe a 3D mouse (gyration.com) would be better, but I've never tried it.
  8. QUOTE (Gary Rubin @ Mar 20 2009, 10:19 AM) Sure, but if your queue size is not changing (why would it change if you are using it like a functional global) then there should be no reallocation. In general the queues have great performance, but I don't know whether 8.x brought any performance improvement. QUOTE (Gary Rubin) This is a project that's been going on for the past 6 years or so. It is in LV7.1.1 because... All those points are reasonable. But are you still using 6-year-old computers? If it's OK to keep up-to-date with hardware, why shouldn't the same be done with software? A well-run project should take into account that regular upgrades of hardware and software are part of the development process. It's also better risk management to have a process for upgrading on a reasonable basis than to wait until it's absolutely necessary and upgrade a system that has never changed, just because some showstopper issue has appeared. I think you can find real reasons to upgrade, and good luck with selling that to the managers and customers.
  9. QUOTE (Gary Rubin @ Mar 20 2009, 08:59 AM) I don't see why the use of queues would add much memory allocation. If you pass a string to a queue, I don't think the string would need to be copied. Now the original wire probably requires a buffer allocation, because you might run the diagram again and get a new string, and the old string still exists in the queue, so LV will have to make a new string for the wire. But the same thing will happen if you pass your string to some other thread by any means, including a queue or an LV2 global. In LV8, there is a shipping example that has a 'singleton' pattern, which you can use to ensure that only one copy of an object ever exists, and if correctly used, should never require a memory copy. That pattern is implemented with queues. So is there a performance problem? You probably shouldn't be worrying about performance unless there is a real problem. QUOTE (Gary Rubin) Also, I seem to recall that there had been some improvement in how queues are implemented in LabVIEW 8.x. I've been trying to get us to upgrade, but nobody seems to want to spend the $. Are there queue (or other) performance issues that I might be able to point to? Well LabVIEW 7.x is at least 5 years old. I think there have been a lot of improvements, including the queues (though I thought they got their face-lift after 6.x), and of course libraries, lvclasses (LVOOP), XControls, I can't remember what else. If you want to keep your LabVIEW chops up professionally, it's not good to ignore all of the new stuff. One thing that has changed is that LV 8 has real licensing and a yearly maintenance cost, which is optional, but then you don't always get the bug fix upgrades. If your company has been stealing LabVIEW (violating the license agreement), then upgrading will probably make it harder to keep doing so, and more money will be spent per year.
Now the cost of LabVIEW is a few thousand dollars, which is just a couple percent of the yearly cost of employing an engineer in the US. It's hard to have sympathy for a company that doesn't want to spend an appropriate amount of money on LabVIEW. You'll have to forgive me if I made assumptions about the situation at your workplace, I realize I may be off-base, and there could be real reasons for not wanting to spend money to keep the tools current.
  10. QUOTE (ejensen @ Mar 19 2009, 08:21 AM) Did you ask NI tech support? It's reasonable to ask here, but for this kind of question (how do I use the advanced features of hardware product X) you will often get the best information from NI support. Not always, since sometimes forum users have worked through a given problem already, but usually.
  11. QUOTE (jdunham @ Mar 17 2009, 09:47 PM) OK, this has been bugging me. I concede that the unflatten from file functions probably shouldn't have a bunch of validation info stuffed into the file. But when you unflatten from a string OR from a file, and the memory manager fails, couldn't we get that through the error out wire rather than through a dialog box? AQ?
  12. QUOTE (Cat @ Mar 19 2009, 04:27 AM) If you've ever put data in a variant or flattened string, and then accompanied that with an enum to tell the downstream code what kind of data is in the variant, then LVOOP makes that all much easier and cleaner.
  13. QUOTE (Ale914 @ Mar 19 2009, 05:34 AM) Well first off, I want to say that after having used queues and notifiers in a lot of different ways, I came to realize that the design of Obtain Queue is excellent. Its behavior makes a lot of different use cases possible, though I don't really have time to write more now. Maybe if I ever start that blog... What you have to understand is that a queue reference is different from a queue. A queue is a by-reference object, so that it doesn't get copied for every branch of its wire, which would be a nightmare. But the wire has to have some contents, which is a reference (a 'pointer' inasmuch as LabVIEW has pointers, which it doesn't) to the actual queue. So the Obtain function returns a queue reference. It also has automatic behavior that if the queue itself doesn't exist, it's automatically created. Occasionally you don't want that automatic behavior, so "Create if not found"=F lets you turn that off. So any time you open/obtain a reference to a named queue, you get a different by-value copy of the reference which points to the same queue. The queue has the neat but sensible property that when all references to it have been released, the queue itself is destroyed. This is usually called reference-counting. Of course the Force Destroy? input is a way to get around that and destroy the queue without cleaning up all the open references. You should only use Force Destroy=T if you have a good reason. It's cool because the reference which first creates the queue doesn't have to be the one that cleans it up. The queue will exist as long as any caller needs it, and it will clean itself up automatically when the reference counting system detects that all interested callers have lost interest. So I guess what I'm saying is that Ale914 is totally right, and I tried to explain why he's right and why it's not a problem, and I'm not sure whether I've succeeded, because I don't fully understand the confusion.
EDIT: So the fact that you can look up queues by name (which is useful) has almost nothing to do with the fact that there's a reference-counting system (which is also useful). The lookup system works as you'd expect, and the reference-counting is supposed to be an internal implementation detail which you never worry about and which always works as long as you always call Release once for every time you call Obtain. Even that rule can be ignored as long as you don't call the Obtain function a kajillion times without releasing (which is what the OP did). For the original poster, you probably don't need to obtain a queue reference on every call or every time through the loop. Often you can obtain the queue reference just once per thread and stuff it in a shift register (or feedback node, same thing), and then it is always available where you need it. It's still good style to close it, but even without doing so you probably wouldn't be leaking memory.
  14. QUOTE (nicolasB @ Mar 19 2009, 01:05 AM) Well what causes the memory leak is that you keep obtaining new references to the queue and you never release them. It's perfectly acceptable to look up a queue by name. It's not as fast, of course, but if the wire is unavailable, then by all means use the name, and it's still pretty darn fast. But when you are done, you need to release the reference. The queue itself will not be destroyed until all its references have been released (or until your top-level VI stops executing). Your code should generally have a Release Queue call for every Obtain Queue.
  15. QUOTE (bsvingen @ Mar 17 2009, 03:46 PM) Nope. A+B will only create a buffer if you need to keep A and B around in addition to the result. I agree LV can get squirrely when you work with huge datasets (I realize that's exactly your concern), but not normally needing to worry about assignment or storage at all is a huge benefit. If you do a lot of work with huge datasets, you might want to buy IMAQ Vision, and treat your data as images. There are all kinds of inplace math functions and I bet they will be tons faster than native LV for million-point datasets.
  16. QUOTE (Aristos Queue @ Mar 17 2009, 08:31 PM) I think we're talking about two different things. There are true binary files, which just dump string data to a file, and there are LabVIEW binary files, which used to be called datalog files, which write arbitrary LabVIEW types in the flatten-to-string format, over which I have very little control. I thought this thread was about the latter, which have never been too useful for me, since it's too easy to render the file unreadable after the data type changes. I figured that's why the versioning was added, and it was piquing my interest. Your comments are definitely valid for true binary files, and that's what we have to use, with our own validation and metadata, since the LabVIEW formats, which could have saved us a lot of effort, were not really robust enough (and I didn't even know they could put up an out-of-memory dialog box; that makes it that much worse). It's a real shame, because front panel datalogging could be extremely useful, UNTIL you make any changes to your front panel. Then you can never read the data again unless you can manage to reverse engineer it. It would be a real selling point if NI could fix this, except that it's not something prospective customers know is broken.
  17. QUOTE (Aristos Queue @ Mar 17 2009, 11:38 AM) String yes, file no. (in any rational world) QUOTE (Aristos Queue) Try renaming a random file as ".png" and then ask a paint program to open it. You'll get any number of strange behaviors. Both my local graphics editors put up an error: "This is not a valid PNG file." (more or less). There's no reason except for bad file format design that a proprietary format can't have a header that identifies the file type and some kind of data validation scheme. QUOTE (Aristos Queue) The trick is to save your data files with a unique file extension and then restrict your users to only picking files with that extension. So anyone can sabotage the system by feeding it an invalid file with the desired extension, and since a dialog will come up before it's possible to validate the file, there is no way to safely use the binary data file functions in an industrial application. Is my understanding correct?
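To illustrate the header-plus-validation scheme argued for above, here's a small Python sketch (the "MYF1" format name and layout are invented for the example): the file starts with a magic number identifying the type and a CRC over the payload, so a reader can reject a renamed or corrupted file through a returned error instead of a dialog box.

```python
import struct
import zlib

MAGIC = b"MYF1"   # hypothetical four-byte file-type identifier

def save(path, payload: bytes):
    """Write payload with a type-identifying header and a CRC."""
    with open(path, "wb") as f:
        f.write(MAGIC)
        f.write(struct.pack("<I", zlib.crc32(payload)))
        f.write(payload)

def load(path):
    """Return (payload, error); error is None on success.
    Bad files come back as error strings, never as dialogs."""
    with open(path, "rb") as f:
        header = f.read(8)
        payload = f.read()
    if len(header) < 8 or header[:4] != MAGIC:
        return None, "not a valid MYF1 file"
    (crc,) = struct.unpack("<I", header[4:])
    if zlib.crc32(payload) != crc:
        return None, "corrupt MYF1 file"
    return payload, None
```

The point is that validation happens before any attempt to interpret the data, so feeding the system a sabotaged file with the right extension fails safely.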
  18. QUOTE (MJE @ Mar 16 2009, 08:45 PM) I remember, but I couldn't find it either. I blame the NSA. QUOTE (MJE) The question that then pops into my mind, is then why is the native behavior of the add prim such that when not operating in place and working with two arrays, the allocation goes on an input? Notice when operating in place it moves to an output. That little nugget seems to suggest to me what might be preventing the optimization of the original code. LabVIEW's pretty smart about such optimizations usually, the fact that it doesn't work in this case I find a bit surprising. Well I certainly don't understand the ins and outs of the dots' locations. But it's unsurprising to see the copy on the input of A+B without an inplace node. Since A and B have to be preserved in the output cluster, it's not possible to perform the add without a new array to hold the results, created before the add is executed. Now when the inplace frame is added, I don't know why the buffer dot moves to the outputs. QUOTE (MJE) As an aside, I was fiddling around with this for the last twenty minutes or so and got much the same results that jdunham summarized in his image. I added another case though: I tried a couple of other things, but nothing else was fast. I guess I should upload my changes to the benchmark in case anyone else wants to play along.
  19. QUOTE (bsvingen @ Mar 16 2009, 03:44 PM) Well only if you need to keep the inputs and the result, like you are doing. LabVIEW generally reuses buffers when it is safe to do so, according to the "VI Memory Usage" chapter of the LabVIEW help manuals (and according to the vehement declarations of every LabVIEW R&D engineer I've ever met). I don't think the use of lvclasses has any effect (according to the vehement declarations of Aristos Queue). QUOTE (bsvingen @ Mar 16 2009, 03:44 PM) More often this is actually what I want (I have omitted references for c, because this has nothing to do with it): void plus(a,b,c) { c = a+b } Well in your example, when a and b are added, c is nowhere around, so how can it know to direct the output to C? [For those of you playing along at home, a, b, and c are all arrays.] If you want to control memory, you need to use the Memory Control palette. (If I remember correctly, you've been using LV 8.2 until recently, so this is a new feature). Even so, I don't think you can avoid some kind of copy if you want to preserve A and B and save the results in C. Sure you preallocated C somewhere earlier, but this diagram doesn't know that C is big enough to hold the output of A+B. Using the inplace structure helps out, but I couldn't get the swap node to make it any faster. In the diagram below, the top left diagram executed about 30% faster than your formula node, and the other three were just a hair slower than the formula node.
  20. I think Justin's approach is pretty good, and might be the fastest to implement. If you are looking for more work and higher accuracy, you could try a two step approach, creating a curve fit along each polar axis. For each radial line and each circle, use a curve fit to create a function that you can interpolate along any point (this is all available in the LabVIEW analysis libraries). You could fit the data to a polynomial or a spline, whichever seems more appropriate for your dataset. For the circles, don't forget to fit the same end point in both directions so that your fit doesn't have a terrible discontinuity where the angle rolls over. Then you would take each point in the rectangular array, and express it in polar coordinates. Then evaluate those points in your nearest fits and combine them. So for point xi, yi, compute [r(xi,yi), theta(xi,yi)], then find your nearest fits r(a), and r(a+1) and evaluate them at theta(xi,yi). Similarly, find your nearest fits theta(b) and theta(b+1), and evaluate them at r(xi,yi). Then take those four points and combine them either with a simple average or a weighted distance average like Justin showed.
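The two-step approach above can be sketched in pure Python. This is a cut-down version (linear interpolation stands in for the polynomial or spline fits, and the function name and argument layout are invented here), but it keeps the structure: interpolate along each of the two nearest circles first, with wraparound so the angle rollover has no discontinuity, then blend between circles.

```python
import math

def polar_bilinear(radii, thetas, z, x, y):
    """Estimate z at Cartesian (x, y) from samples z[i][j] taken on a
    polar grid: circle radii[i] (ascending) and angles thetas[j]
    (radians, ascending, starting at 0)."""
    r = math.hypot(x, y)
    t = math.atan2(y, x) % (2 * math.pi)

    def along_circle(i):
        # Interpolate around circle i, wrapping past the last angle so
        # the fit has no discontinuity where the angle rolls over.
        j = max(k for k, tk in enumerate(thetas) if tk <= t)
        j2 = (j + 1) % len(thetas)
        t2 = thetas[j2] if j2 else thetas[0] + 2 * math.pi
        frac = (t - thetas[j]) / (t2 - thetas[j])
        return z[i][j] + frac * (z[i][j2] - z[i][j])

    i = max(k for k, rk in enumerate(radii) if rk <= r)
    if i == len(radii) - 1:
        return along_circle(i)          # at or beyond the outermost circle
    frac = (r - radii[i]) / (radii[i + 1] - radii[i])
    return along_circle(i) + frac * (along_circle(i + 1) - along_circle(i))
```

Swapping the inner linear interpolation for an evaluated polynomial or spline fit per circle (and doing the same along radial lines, then averaging the four estimates as described) would recover the full scheme.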
  21. QUOTE (LVBeginner @ Mar 10 2009, 08:10 AM) well :xxx is the port number, which defaults to 80 for http. Sounds like a firewall somewhere is blocking other ports. Again, start with telnet <ip address> <port #> from the command prompt to make sure the network is working end to end. Have fun!
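The telnet check can also be scripted. Here's a small Python equivalent (a sketch, not part of the original advice) that just tries to open a TCP connection to the given host and port:

```python
import socket

def check_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within
    the timeout, False otherwise (refused, filtered, or unreachable)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False for the server's port, the problem is the network or a firewall, not your LabVIEW code.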
  22. Did you check out the UDP examples in LabVIEW? How is your 'circuit' able to receive UDP datagrams?
  23. QUOTE (LVBeginner @ Mar 10 2009, 05:53 AM) Well there's nothing wrong with a URL of the form http://192.168.1.100/myfolder/myfile.html. If you have machine names they are just converted to the numbered address sooner or later (probably sooner). If you don't know what's supposed to come after the IP address, no one on this board can help you. You have to find whoever wrote the server.
  24. QUOTE (d_nikolaos @ Mar 9 2009, 12:18 PM) Anything is possible. I recommend you post what you have tried so far (that is, attach the relevant VIs you have already written to your post), and write out a description of how it's failing to achieve your desired behavior, and then you are likely to get some more help.
  25. QUOTE (texasaggie97 @ Mar 6 2009, 01:44 PM) Thanks, that's exactly what I was looking for. I wasn't successful figuring it out on ni.com, but the next step was a call to the local NI rep (after asking around here, of course).