
Mark Yedinak


Posts posted by Mark Yedinak

  1. I have never encountered any performance issues when using queues, and NI has done a good job of optimizing them. One nice feature in 8.6 is the ability to have a circular queue: if you define a maximum size, old elements are dropped once you reach that limit and the new element is added. Prior to 8.6 you had to wrap your enqueue calls to handle this manually. A rough sketch of this behavior appears at the end of this post.

    Also, I agree that it is worth getting the maintenance agreement. The cost is negligible when you compare it to the benefit.
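
    Since LabVIEW block diagrams can't be pasted as text, here is a rough Python sketch of the lossy-enqueue behavior described above, i.e. what the pre-8.6 wrapper had to do by hand: when the queue is full, the oldest element is dropped before the new one goes in. The class and names here are purely illustrative.

        from collections import deque

        class LossyQueue:
            """Bounded queue that drops its oldest element when full,
            mimicking the lossy enqueue behavior added in LabVIEW 8.6."""

            def __init__(self, max_size):
                # deque with maxlen silently discards the oldest item on overflow
                self._buffer = deque(maxlen=max_size)

            def enqueue(self, element):
                self._buffer.append(element)    # oldest element dropped if full

            def dequeue(self):
                return self._buffer.popleft()   # raises IndexError if empty

        q = LossyQueue(max_size=3)
        for i in range(5):
            q.enqueue(i)
        print(list(q._buffer))  # [2, 3, 4] -- elements 0 and 1 were dropped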

  2. Here is a simple example of a parallel timer. It was whipped up pretty quickly, and I was lazy and didn't create a typedef for the enum of timer events, but it gives you a simple example of how you can handle this in a parallel task. You could also incorporate this into your current state machine, but that does add overhead. A separate task (parallel loop) is a nice way to handle this type of thing.

  3. I would create a parallel loop containing your timer, which is effectively a stopwatch. This timer loop can be controlled by a queue so that you can pass in events such as start, stop or pause. When running, this state machine will generally have one event queued, "update the display". When your master state machine is running you can pass the timer loop messages to start or stop your timer as well as pause or reset it. You can even package this up as a simple set of subVIs or an LVOOP object and give it a reference to your display indicator. If you do this you can reuse it in multiple applications.
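
    As a rough text sketch of this idea (LabVIEW diagrams can't be shown here), the following Python fragment runs a stopwatch in its own parallel loop and controls it with start/stop/pause/reset messages passed over a queue. All names, the 0.5 s update rate, and the message strings are illustrative assumptions, not taken from any actual VI.

        import queue
        import threading
        import time

        def timer_loop(commands: queue.Queue) -> None:
            """Parallel stopwatch loop driven by messages from a command queue."""
            running = False
            elapsed = 0.0
            last = time.monotonic()
            while True:
                try:
                    cmd = commands.get(timeout=0.5)  # default action: update display
                except queue.Empty:
                    cmd = "update"
                now = time.monotonic()
                if running:
                    elapsed += now - last            # accumulate only while running
                last = now
                if cmd == "start":
                    running = True
                elif cmd in ("stop", "pause"):
                    running = False
                elif cmd == "reset":
                    elapsed = 0.0
                elif cmd == "quit":
                    break
                elif cmd == "update":
                    print(f"elapsed: {elapsed:6.1f} s")

        # The master state machine just posts messages to the timer loop.
        commands = queue.Queue()
        timer = threading.Thread(target=timer_loop, args=(commands,))
        timer.start()
        commands.put("start")
        time.sleep(2)
        commands.put("pause")
        time.sleep(1)
        commands.put("quit")
        timer.join()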

  4. QUOTE (Aristos Queue @ Mar 20 2009, 08:57 AM)

    Ok, but those references could be open and closed by the still-running, not-aborted UI VIs. The state machine that's doing work that needs to be stopped doesn't have to do the cleanup. If you can't open all the needed references before the state machine starts running, you could have the state machine call back to the original VI hierarchy (through posting a message and waiting for a response using two queues or user events) so that the orig hierarchy can open references on the state machine's behalf. That way those references survive the abort. Then when it does abort, the original VIs take care of closing any references that weren't closed explicitly by the state machine.

    I'm still not saying this method is good or bad; I'm just walking through arguments to see if this path works. It seems like a more effective way to stop a process than having to code boolean checks all over your code, especially when LV has such nice hooks right in the assembly code at the end of each clump of nodes to detect abort.

    But the state machine may do lots of other things that it needs to clean up besides just some open references. It may have files open, data that should get logged, or external equipment that it needs to stop or reconfigure. All of this functionality is unique to the particular state machine and should not be exposed outside of it. This is basic encapsulation. The containing object (the state machine) should be the only thing that knows how to clean itself up. Even in the case of safety concerns and the need for immediate shutdown, it is the state machine that should know what actions need to be taken to stop everything. Simply aborting the VI does not guarantee that everything that needs to be stopped or turned off actually gets stopped or turned off.

    Abruptly aborting VIs should be reserved for the most extreme conditions, such as a non-responsive or runaway VI as mentioned earlier. In all other cases it is best to handle the abort event gracefully to ensure that the necessary cleanup gets run.

  5. Another excellent book is the OO design pattern "bible" (Design Patterns: Elements of Reusable Object-Oriented Software) by the Gang of Four. It is very good at presenting reusable design patterns for building OO systems. With respect to your question about learning C++, I don't believe you need to learn a text-based OO language, but it helps to understand them somewhat. This is mainly because most books on the subject present example implementations in one of the traditional text-based languages, so understanding them in a general sense helps when reading books on the topic.

  6. QUOTE (crossrulz @ Mar 19 2009, 08:56 AM)

    But are functional globals just a crutch or are they actually better to use? Hmm...

    Functional globals are different and, in a way, can be thought of as very simple objects in an OOP environment. Functional globals minimize race conditions since they only allow one access to the variable at a time. They do not eliminate race conditions, but they do guarantee that only one access will occur at a time. In addition, they allow you to add functionality that is not there for a native global, and from what I have heard they are better with respect to resource usage than native global variables. It is also easier to provide sequence control via data flow, since you can include error in and error out terminals, allowing you to impose data flow using the error cluster as opposed to the data value itself.
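
    A functional global doesn't translate directly to text, but a loose Python analogue of the concept is sketched below: a single stored value with named actions, where a lock serializes access so only one caller touches the value at a time. This is only an illustration of the idea, and the action names are invented for the example.

        import threading

        class FunctionalGlobal:
            """Loose analogue of a LabVIEW functional global (action engine):
            one stored value plus named actions, with all access serialized."""

            def __init__(self):
                self._lock = threading.Lock()
                self._value = 0

            def call(self, action, data=None):
                with self._lock:                 # only one access at a time
                    if action == "set":
                        self._value = data
                    elif action == "get":
                        return self._value
                    elif action == "increment":  # extra behavior a plain global
                        self._value += 1         # variable cannot offer
                        return self._value

        counter = FunctionalGlobal()
        counter.call("set", 10)
        print(counter.call("increment"))  # 11
        print(counter.call("get"))        # 11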

  7. QUOTE (Aristos Queue @ Mar 18 2009, 05:07 PM)

    Another solution would be having the UI be VI A and the state machine be VI B and instead of calling VI B as a subVI, call it using a VI Reference and the Run method, then when user hits the STOP button, you call the Abort method of the VI reference. Your app as a whole keeps running, but that state machine stops.

    Is that more or less dirty? Can it be made acceptable somehow?

    This results in a rather harsh stop of the state machine. This method doesn't allow you to clean up gracefully. I prefer to be able to process the abort rather than have the rug pulled out from under me.

  8. They are also a godsend when it comes to resolving cross-linking issues. I too don't like the auto-populate feature, but I generally remember to turn it off. However, I have found that the benefits outweigh the disadvantages. I also like having everything organized together. Since we use lots of reusable components, it is easy to add them to the project via virtual folders, and you have a one-stop shopping list for all of the VIs that are used. Since we reuse many of our components, they are not always located in a single area of the file structure, but the project helps to overlay that organization on an application.

  9. QUOTE (SULLutions @ Mar 18 2009, 01:44 PM)

    Mainly for lurkers,

    The abort, being an abort, is likely queued "at the opposite end" so it takes precedence over regularly scheduled tasks. The simple insertion responds neatly between the regular cases of the queued state machine. Anywhere that has access to the queue can Preview Queue Element to truncate its own lengthy operations. If there are prolonged periods within subVIs that don't already need the queue, the LV2 global is probably a better solution.

    What I have generally done is to insert the abort command at the front of the queue. It will be invoked as the very next state regardless of where the state machine is. Granted, it does require that a state complete its processing, and it doesn't result in a state exiting after partial execution. The processing of the abort state will run through any clean-up states that are required. You are correct, though, that this does not reach down into subVIs. In our applications we have several objects that we use regularly, and for the ones that do have lengthy subVIs (such as communications with long timeouts) we provide an abort method on the class to handle this type of action. When we process the abort event, in addition to queuing the abort state we invoke the abort methods on the appropriate classes. This really is a combination of both approaches.

  10. What we do to handle this is use a traditional producer/consumer model and allow the producer (the event structure) to catch the abort request. Our state machine is a queued state machine, so when we catch the abort we can inject an abort state into the running state machine and exit gracefully. This works very nicely, and we don't need to use any global variables or references to controls in the state machine. In addition, we don't have to check the abort state before or after every state change.
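
    Here is a rough Python sketch of that pattern (the real thing is two parallel LabVIEW loops, which can't be pasted as text): the producer catches the abort request and pushes an "abort" state to the front of the state queue, so the consumer executes it as its very next state, runs its cleanup, and exits gracefully. The state names are invented for the example, and thread synchronization is omitted for brevity.

        from collections import deque

        state_queue = deque(["acquire", "process", "log", "acquire", "process"])

        def producer_abort():
            # The event loop catches the user's abort request and inserts the
            # abort state at the front of the queue so it runs next.
            state_queue.appendleft("abort")

        def consumer():
            while state_queue:
                state = state_queue.popleft()
                if state == "abort":
                    # Run any required cleanup states, then exit gracefully.
                    for cleanup in ("close files", "stop hardware"):
                        print("cleanup:", cleanup)
                    break
                print("executing state:", state)

        consumer()           # normal run drains the queue
        state_queue.extend(["acquire", "process", "log"])
        producer_abort()     # abort arrives while states are still pending
        consumer()           # "abort" runs next; the remaining states are skipped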

  11. QUOTE (Yair @ Mar 17 2009, 02:16 PM)

    8.5 and 8.5.1 were binary-compatible. This meant that you didn't have to recompile your VIs when upgrading (e.g. to mass-compile vi.lib), but it also meant that if you have code which has an 8.5 bug which was fixed in 8.5.1, you would have to recompile the VI explicitly for the correct machine code to be generated.

    I definitely understand that. What I meant about the strange issue was tracking down exactly where this issue was introduced. It took a while to nail it down to something specifically in 8.5 itself.

  12. I am not sure I would classify this as a bug. Without knowing the specifics of the headers NI uses when storing the binary data, it is quite possible that enough of the incorrect data gets interpreted as a valid header. In that case it could be trying to decode the remaining portion of the file using this invalid data as if it were valid. When I looked at flattened variant data, it seemed fairly easy to feed it garbage and have it misinterpreted as valid data.

    If the data headers are simplistic, it would be easy to misinterpret garbage as valid data. To avoid this, the headers would need to contain information such as CRCs or checksums to validate the data in the first place. If NI is doing this, then I would classify it as a bug. If it uses only simplistic data headers, then at best you could request a feature enhancement to include data validation. Otherwise it falls into your lap to validate the data before working with it (a generic sketch of this kind of check appears at the end of this post).

    I can certainly see your point that you would like consistent behavior, but this falls into a gray area as to who must validate the data in the first place.
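
    As a generic sketch of the kind of validation suggested above (this is not NI's actual flattened-data format; the header layout here is hypothetical), a header carrying a payload length and a CRC32 lets you reject garbage before attempting to decode it:

        import struct
        import zlib

        # Hypothetical header layout: 4-byte payload length + 4-byte CRC32.
        HEADER = struct.Struct(">II")

        def pack(payload: bytes) -> bytes:
            return HEADER.pack(len(payload), zlib.crc32(payload)) + payload

        def unpack(blob: bytes) -> bytes:
            length, crc = HEADER.unpack_from(blob)
            payload = blob[HEADER.size:HEADER.size + length]
            if len(payload) != length or zlib.crc32(payload) != crc:
                raise ValueError("corrupt or garbage data")  # reject before decoding
            return payload

        print(unpack(pack(b"flattened data")))   # b'flattened data'
        try:
            unpack(b"random garbage bytes")      # fails the length/CRC check
        except ValueError as err:
            print("rejected:", err)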

  13. Here is an update regarding the performance issues. It appears that I ran into a bug with the in-place structure in LabVIEW 8.5. After experimenting and working with NI, I found that I could save the VI as an 8.2 version and run it there without any problems. I could also create and save it in 8.5.1 without any problems. However, if I saved the code in 8.5 I would see the performance issue, yet if I opened the version I saved in 8.5.1 in 8.5 it ran fine. This was definitely a strange issue. Needless to say, the difference between working correctly (0.04 seconds to convert) and not working correctly (15.3 seconds to convert) is significant.

    I see an upgrade in the near future.

  14. QUOTE (postformac @ Mar 17 2009, 01:16 PM)

    Thanks but the data type of the array is correct to only be integer numbers, I want to change the display of the table and the option to change the display format is not given on the right click menu for the table.

    Oh actually I found it, I can right click on the table in the block diagram and change the data type to integer. I still have the red dot on the "convert to dynamic data" block though and the only options on the properties for input data are still floating point and boolean. Is there any way to fix that?

    Thanks

    Well, my suggestion was not actually changing the data type of the 2-D array. It would still be a 2-D array of doubles; what was being changed was how the data is displayed. My suggestion would display the data as if it were integers. Sometimes you don't have any issues when you have coercion dots, but other times you do. I would try to avoid them in your code since you can get unexpected results when coercing data from one type to another. So, if you leave your data as doubles, my earlier suggestion will modify the display so that the values appear to be integers. If you do change the data type of the array, I recommend you use a consistent data type throughout your code and avoid coercion dots.

  15. QUOTE (zyh7148 @ Mar 17 2009, 03:52 AM)

    Thank you very much!

    Is this software toolkit used for SNMPv3? Are there differences between SNMPv1 and SNMPv3? Please send me some examples of this software toolkit.

    Thanks!

    There are a couple of examples in the library file. This version supports only SNMPv1. With minor modifications it could support SNMPv2, which mainly added the GET-BULK PDU. As mentioned previously in this thread, SNMPv3 adds security to the protocol, and adding support for it would require a bit more effort. However, if you are only doing simple GET requests, this version should work with your SNMP agent even if it supports SNMPv3, since SNMPv3 agents will still accept SNMPv1 messages. From what you described needing to do, this code should work for you.

  16. First, good luck on your job hunt; I don't envy you having to look for a job in this economy. Secondly, and please don't take this the wrong way, but based on several of your posts, your questions here regarding LabVIEW, and your description of how this interview went, I would recommend that you not try to present yourself as an experienced LabVIEW programmer. This may sound harsh, but one of the worst mistakes you can make going into an interview is presenting yourself or your skills as more than what they really are. Naturally you want to put your best foot forward, but you also want to be honest. Having interviewed many prospective employees over the years, I can say that I leave an interview with an extremely negative impression of someone who tries to oversell themselves. Be honest with yourself and your interviewer. I would rather have someone tell me they don't know the answer to a question than try to BS an answer. This is even more important if you are interviewing with a technically knowledgeable person and not simply someone from HR who probably doesn't have a clue about the technical requirements of the job.

    Some general things you can do to make a good impression are to show a genuine interest in and knowledge of the company you are interviewing with, and to come prepared with questions for the people interviewing you. Things you can ask are what your responsibilities would be, what the group dynamic of the team you would be working with is like, or what a typical work day looks like. Don't ask about money or benefits during initial interviews; save those questions for when they make an offer, not before.

  17. I agree that using the disable structure is not the right way to go, since it will lead to greater confusion by using the structure in an abnormal way. Crelf's suggestion and the null-wire suggestion both have specific meaning: they impart sequence flow where there is no inherent data flow. Either method removes the need to wrap subVIs and introduce artificial data flow by passing an error cluster or some other unused data through them, or to use very large sequence frames to accomplish the same thing.

  18. I am not sure what you are doing specifically, but if the table is an indicator the user can't edit the cells, unless you are referring to them editing it in the development environment. In that case there isn't much you can do. One option is to lay a transparent control over the table, disable it, and place it in front. Of course, if they are in the development environment they have full control over the source and can simply move your transparent control to the back or hide it. The only real way to prevent the user from editing your controls is to build your program into a stand-alone application; that way you have full control over it.

  19. As an alternative, you could keep the controls as individual controls and update them using references. If you were to do this, I would probably encapsulate the management of the controls and references in a functional global and reference the specific boolean by name or some similar identifier. This way they can remain separate controls, yet you would be able to manage them in a consistent and expandable manner.

  20. QUOTE (TobyD @ Mar 13 2009, 02:05 PM)

    How much free memory do you have? I ran both versions using the "Run Continuously" button and did not notice any performance lags or memory leaks, but I'm thinking that if you are running into low memory problems and having to use swap space that could slow you way down.

    I'm on a T7200 @ 2.0GHz with 2GB RAM for what it's worth.

    One machine has 2 GB of RAM with at least 1 GB free. The other has 3 GB of RAM with at least 2 GB free. I don't think memory is an issue. Our FW development team told me that our IT department recently installed some spyware to monitor our computers, which effectively broke their compiles. I am removing it to see if it is causing my problems as well.

  21. Alright, now I am concerned as well as a little relieved. I am relieved that you are seeing decent performance. I have been racking my brain over this for the last couple of hours trying to see what could possibly be taking so long. At least you validated that I am not completely insane for thinking this code should have been running better than what I was seeing.

    Now comes the fun part. After hearing your times I rebooted my computer and tried running it again. For a few runs it took 3 seconds to complete; then it jumped back to 15.2602 seconds. I then tried running it on a completely different machine, and it took about 11 seconds there. Both of these are decent machines (an Intel Xeon 5110 @ 1.6 GHz dual core and an Intel T2400 @ 1.83 GHz dual-core laptop) and both had crappy performance. Would anyone have any ideas what could be causing the severe performance problems? Our firmware developers ran into an issue recently where it turned out that our IT department had something running which caused their compiles to stall and take a very long time. I will check further into this. Are there other things I should be looking for? This will be a major problem if I can't resolve what is causing the performance problems.

    BTW, the image files that I posted are the ones I have been using for testing, so we are comparing the same conversions.

  22. OK, so I have the conversion working now; however, I am not satisfied with the performance. I have been playing with different variations and am not sure what else I can do to improve it. We have a test that uses hundreds, possibly more than a thousand, of these images. At present, both versions of the code I have included here take approximately 15 seconds per image for the decode. Obviously, when multiplied by several hundred this really adds up. If anyone can think of any ways to improve the performance, I would love to hear your suggestions.
