Everything posted by Mark Yedinak

  1. OK, so I have the conversion working now; however, I am not satisfied with the performance. I have been playing with different variations and am not sure what else I can do to improve it. We have a test that uses hundreds, possibly more than a thousand, of these images during the test. At present, both versions of the code I have included here take approximately 15 seconds per image for the decode. Obviously, when multiplied by several hundred images this really adds up. If anyone can think of any ways to improve the performance I would love to hear your suggestions.
  2. The first time I saw it was back in the late '70s or early '80s. It was shown before a movie I was at. It was at a drive-in, of all places.
  3. QUOTE (TG @ Mar 12 2009, 07:25 PM) Unfortunately you'll have to wait until NI implements it though. Crelf was merely making a suggestion for an alternative to the null wire concept.
  4. QUOTE (crelf @ Mar 12 2009, 03:45 PM) I like it. This would be a nice addition and it would eliminate a special wire type.
  5. QUOTE (Aristos Queue @ Mar 12 2009, 03:02 PM) Well, you still end up processing the event to some degree. In the case of this X-Control I don't know if you would really be saving much time. QUOTE (Aristos Queue @ Mar 12 2009, 03:02 PM) 2. Catch the event and rethrow as a different event that is handled somewhere else. As of LV 8.6, there are lossy queue primitives, so you can enqueue your event into a separate handler with the lossy behavior, so if the queue fills up, you just start dropping updates. Unless you end up spawning a background task for your X-Control I don't think this solution would work very well for X-Controls. If you did queue something into a lossy queue, what part of your code is servicing it? X-Controls are intended to encapsulate the processing required for complex custom controls. I don't think it would be a good idea for any of the processing to be handled outside of the X-Control itself (if you did, it would sort of defeat the purpose of making it an X-Control in the first place), and would a separately spawned task from the X-Control itself even have access to the display? I guess I would have to look at the rolling LED X-Control example. I do agree with you, though, that we as programmers need to have control over how the events are filtered, as well as if and when. But it would be nice if we had more flexibility than what we currently have. (A rough text-language sketch of the lossy-queue idea appears after this list.)
  6. QUOTE (crelf @ Mar 12 2009, 03:25 PM) This is basically what I have been suggesting with the Null wire. The Null wire would simply connect to a VI without being required to be connected to the connector pane. Essentially we are both suggesting the same thing with the exception that your suggestion would allow you to use any wire to control the sequence whereas mine would require the Null wire. Either way I don't think anyone in this discussion would disagree that we could improve upon the current sequence frame.
  7. QUOTE (menghuihantang @ Mar 12 2009, 12:59 PM) Yes. When I say style I mean things like aligning objects on the FP, minimizing bends in wires, avoiding backward-flowing wires, labeling wires and constants, and such items. NI does have preferred ways of doing these things, and it is best to follow their style for the exam. If you have access to the VI Analyzer you will have a very good idea of the types of things NI will look at when grading your code style.
  8. QUOTE (jlokanis @ Mar 12 2009, 12:50 PM) Are you saying you need to have multiple UIs in your system, or that coordinating the data from multiple applications on multiple machines into a single UI is the problem? If it is the latter, you could use network queues to send data back and forth between the controlling UI and the slave applications. If you want bidirectional communication, I would have a master UI queue that all of your individual applications can post to in order to inform the UI that an update is required. As part of your message you would need an application ID. Each application would have its own queue for receiving updates from the UI, and the master UI would need to manage those application queues. As processes start up they register with the UI and give it the connection specifics for their queue. The connection specifics for the UI itself would need to be static (fixed server address and port number) so all of the remote processes know how to register with it. If the UI wanted to broadcast an update to all remote applications it could simply iterate over all of its remote queues and post the event. (A small sketch of this register/broadcast scheme appears after this list.)
  9. QUOTE (ldindon @ Mar 12 2009, 04:54 AM) When you said "Internally", do you mean inside the XControl? If the answer is "yes" I do not understand what you mean. I agree that you need to make your code as bullet proof as possible and that you don't have control over how your users will use your X-Control. I also agree that NI could improve the X-Control itself by allowing updates to be filtered or flushed to handle situations like this. However, when I was referring to using the value internally I was referring to optimizing the architecture of the code to avoid things like rapid updates to a UI display. In general it is best to avoid doing this since it will impact your overall performance. I usually don't worry too much about it if the task I am working on will not be doing any high-frequency calculations or processing. However, if I am working on something that requires high performance, I like to completely separate the UI from the processing and update the display at a rate that is acceptable to humans: still "realtime" to the eye, yet not overburdening the application with unnecessary work such as updating the value of a control thousands of times per second. (A sketch of this rate-limited update approach appears after this list.)
  10. QUOTE (crelf @ Mar 12 2009, 12:18 AM) Yes, I was remiss in doing that last night. It was getting late, I was tired and hungry and wanted to leave work after a 13 hour day. Anyway, here is the code and a few PCX images that I have been testing
  11. Well, so far it looks like neither of these solutions will work. The Image Toolbox only supports 8-bit-per-pixel PCX images and mine are 1-bit-per-pixel images. The Image Toolbox doesn't include any block diagrams, so I wouldn't be able to modify it for the different color depth. The IMAQ solution doesn't know how to handle the run-length encoding used by the PCX format. I have been trying to roll my own, but so far the images aren't displaying correctly. It appears that I am decoding the RLE data correctly, but the image is definitely not being displayed correctly. If anyone has any other suggestions I would appreciate hearing them. (A sketch of a 1-bit-per-pixel PCX decode appears after this list.)
  12. Does anyone know if there is any LabVIEW code for converting a graphic image stored as a PCX image to a raw bitmap? In our printer testing we have the capability of receiving the label image that will be printed. Some of our printers return raw bitmaps and others use the PCX format. Since many of our existing tests were developed for the printers that use the bitmap format, I would like to convert the PCX images to a raw bitmap. Does anyone have code for doing this?
  13. QUOTE (nickg @ Mar 10 2009, 12:24 PM) You need to see how your clock is using the Ethernet chipset. In order for it to use UDP or TCP it has to have a network stack running on it that supports the IP protocol as well as the UDP and TCP protocols. If the clock only supports the Ethernet layer then you will not be able to communicate with it directly using LabVIEW. There is more to communicating over a network than simply plugging a device into a network switch or hub; you need to know the specific protocols and ports being used in order to get two devices talking to each other. (A minimal UDP example appears after this list.) As for the examples, they should be available. What LabVIEW package are you using? Are you using the base LabVIEW package, the developer suite, a student edition or an evaluation version? Not all features are available in all packages, though I thought the basic TCP and UDP options were available in all versions.
  14. QUOTE (TobyD @ Mar 10 2009, 09:47 AM) In addition to the advice above I would say that you should look at the examples for VISA communications.
  15. I think we need more information before we can really give you any help. What is controlling the circuit you are trying to communicate with? Is this a third-party piece of hardware or is this something you are developing? Both ends of the communication need to be network aware, so simply sending data from your computer over the network may not work; the circuit will need to be listening for the message on the appropriate port, whether it is a UDP or TCP port. As for programming, the examples provided in LabVIEW for both the UDP and TCP VIs should help.
  16. It sounds like your object could be refactored. Rather than initializing the map with a reference to a vehicle object, why not have a method on the map object for plotting a vehicle's position? Simply call this method with the correct vehicle reference each time you need to plot a position. This will work equally well for a single vehicle or multiple vehicles because the vehicle information is only required by, and passed to, the method that needs it. The vehicle reference should not be an attribute of the map. (A small object sketch of this refactoring appears after this list.)
  17. QUOTE (crelf @ Mar 6 2009, 12:12 PM) I can only provide anecdotal evidence that there is no memory leak. As I stated earlier, in our environment we are constantly opening and closing connections. If there truly were a memory leak we would have all kinds of issues with our applications.
  18. Your approach of detecting the error, closing the connection and reopening it should work fine, though you may lose some data in the process. The framing error is most likely caused by some clock drift, and reestablishing the connection should get you back in sync. (A sketch of this recovery loop appears after this list.) QUOTE (neBulus @ Mar 6 2009, 09:48 AM) I don't agree with this. In our applications we open and close VISA sessions tens of thousands of times and we do not see any problems.
  19. Personally I prefer the producer/consumer pattern. I like to have all of the user interface events handled by an event structure. This provides faster feedback for the user, even if the actual action is not taken immediately; you can always let the user know that it is being processed. With a state-machine-only architecture you always have to wait for a state to complete before you can detect input from the user. In some cases this can take a long time, and the user is left wondering whether the application is aware of their input. (A sketch of the pattern appears after this list.)
  20. Wouldn't a functional global be more flexible? As a rule I try to avoid using global variables, and when I do need them I opt for functional globals over standard global variables. Functional globals help to minimize race conditions since only one access at a time is allowed. In addition, you can include other functionality if required. (A rough text-language analogue appears after this list.)
  21. QUOTE (Thang Nguyen @ Mar 5 2009, 01:05 PM) I don't think this is possible. You will have to use a ring control and replace the items in it dynamically.
  22. My guess is that you are using some external DLL or possibly some .NET stuff that the application builder is not finding. I had the same cryptic error message on an application once that was using some .NET stuff when .NET was not installed on the computer. In my case, though, the application built without any errors and only threw the error when it was executed.
  23. QUOTE (mesmith @ Mar 5 2009, 12:05 PM) I agree that a basic string is more portable. From what I have observed the variant type is fairly easy to decode; however, since this is an NI-internal structure, there is no guarantee that things won't change over time. Variants aren't much more than a flattened string with additional type and size information embedded in the string. Personally I like the structure used in SNMP messages (actually the entire SNMP packet is constructed this way). Each item consists of three parts: the first part is the data type, the second part is the length of the following data, and the third part is a variable-length byte array which is the data itself. The type and length fields are fixed length. This structure is easy to decode and it is platform and language independent. (A sketch of this type/length/value layout appears after this list.) Of course your message is not necessarily sized the most efficiently, but this is generally not that big of an issue. My suggestion of using variants is more of a LabVIEW-only solution, but it didn't sound like other applications would be involved; your approach is more universal. QUOTE (jdunham @ Mar 5 2009, 12:35 PM) I have some wrapper VIs which invoke the SVN command-line client via System Exec.vi. I guess I should open-source them, since they are pretty simple. I'm sure other heavy SVN users have something similar. Jason If you could package them up that would be great. I am working on an automated test system that has literally thousands of data files that we need to manage. We have discussed using CVS for this but I would prefer to go with SVN. This is a feature we will be adding in the near future, and if we could use some existing code it would save us some development effort.
  24. QUOTE (ldindon @ Mar 5 2009, 10:35 AM) What you say is true, but since the X-Control is a user interface item you could simply gate when you actually update its value. Internally, if you need to work with the latest and greatest value, you could use a shift register or some other mechanism that does not trigger the event in the first place. There was another discussion regarding the event structure and its queuing of events which highlighted the desire for us to have the ability to filter events, discard previous events of the same type, or have some type of "debounce" on the events.
  25. QUOTE (dblk22vball @ Mar 5 2009, 10:35 AM) That's good. I would still consider using the message broadcast instead of polling. It will be much more efficient and it will require very little overhead in your applications.
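
Code sketches referenced in the posts above

For item 5: a minimal text-language sketch of the lossy-queue idea, since the original discussion is about LabVIEW's graphical lossy enqueue primitives. The queue size and the printed "display" update are illustrative only; the point is that the producer never blocks and stale updates are dropped when the consumer falls behind.

    # Lossy update queue: when full, discard the oldest pending update so the
    # producer never blocks (analogous to LabVIEW's lossy enqueue behavior).
    import queue
    import threading
    import time

    updates = queue.Queue(maxsize=10)            # at most 10 pending UI updates

    def post_update(value):
        """Producer side: never block; drop the stalest update if full."""
        while True:
            try:
                updates.put_nowait(value)
                return
            except queue.Full:
                try:
                    updates.get_nowait()         # discard the oldest pending update
                except queue.Empty:
                    pass                         # someone else drained it; retry

    def ui_service_loop(stop):
        """Consumer side: drain updates at a human-friendly rate."""
        while not stop.is_set():
            try:
                value = updates.get(timeout=0.1)
                print("display now shows:", value)   # stand-in for the real UI write
            except queue.Empty:
                continue

    stop = threading.Event()
    threading.Thread(target=ui_service_loop, args=(stop,), daemon=True).start()
    for i in range(10000):                       # a burst far faster than the UI needs
        post_update(i)
    time.sleep(0.5)
    stop.set()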
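
For item 8: a small sketch of the register/broadcast scheme. Plain in-process queues stand in for the network queues; in the real system each registration would carry the connection specifics (address and port), and the names used here are illustrative.

    # Master UI queue plus one queue per registered application.
    import queue

    ui_inbox = queue.Queue()        # the master UI queue every application posts to
    app_queues = {}                 # app_id -> that application's own queue

    def register(app_id, app_queue):
        """An application announces itself and hands the UI its queue details."""
        app_queues[app_id] = app_queue
        ui_inbox.put({"type": "registered", "app": app_id})

    def post_status(app_id, payload):
        """Applications tell the UI an update is required; the app ID rides along."""
        ui_inbox.put({"type": "status", "app": app_id, "data": payload})

    def broadcast(message):
        """The UI iterates over every registered queue and posts the message."""
        for q in app_queues.values():
            q.put(message)

    # Two stand-in applications register, send status, and receive a broadcast.
    q1, q2 = queue.Queue(), queue.Queue()
    register("tester_01", q1)
    register("tester_02", q2)
    post_status("tester_01", {"progress": 42})
    broadcast({"type": "command", "data": "pause"})

    while not ui_inbox.empty():
        print("UI received:", ui_inbox.get())
    print("tester_01 received:", q1.get())
    print("tester_02 received:", q2.get())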
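
For item 9: a sketch of keeping the latest value internally while only refreshing the display every so often. The 250 ms period, the two-second run time and the squaring "computation" are placeholders.

    # Processing runs flat out; the "display" is refreshed at most 4 times a second.
    import time

    UPDATE_PERIOD = 0.25            # refresh the display at most every 250 ms
    last_shown = 0.0
    latest_value = None

    start = time.monotonic()
    i = 0
    while time.monotonic() - start < 2.0:        # stand-in for the acquisition loop
        latest_value = i * i                     # internal value is always current
        i += 1
        now = time.monotonic()
        if now - last_shown >= UPDATE_PERIOD:    # only occasionally touch the UI
            print("display:", latest_value)      # stand-in for writing to a control
            last_shown = now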
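
For item 11 (and the request in item 12): a sketch of decoding a 1-bit-per-pixel PCX image to rows of pixel values. It follows the published PCX layout (128-byte header, run-length encoding where a byte with the top two bits set carries a repeat count for the byte that follows, scanlines padded to the bytes-per-line field, bits packed most significant bit first); a single-plane image is assumed and error handling is omitted.

    # 1-bpp PCX decode: RLE-expand the pixel data, then unpack each byte
    # into eight pixels, MSB first, dropping scanline padding.
    import struct

    def decode_pcx_1bpp(data):
        xmin, ymin, xmax, ymax = struct.unpack_from("<4H", data, 4)
        bytes_per_line = struct.unpack_from("<H", data, 66)[0]
        width, height = xmax - xmin + 1, ymax - ymin + 1

        decoded = bytearray()
        i, needed = 128, bytes_per_line * height
        while i < len(data) and len(decoded) < needed:
            b = data[i]; i += 1
            if (b & 0xC0) == 0xC0:               # run: low 6 bits give the count
                decoded.extend(bytes([data[i]]) * (b & 0x3F)); i += 1
            else:                                # literal byte
                decoded.append(b)

        rows = []
        for y in range(height):
            line = decoded[y * bytes_per_line:(y + 1) * bytes_per_line]
            bits = []
            for byte in line:
                bits.extend((byte >> shift) & 1 for shift in range(7, -1, -1))
            rows.append(bits[:width])            # trim the line padding
        return rows                              # rows of 0/1 pixel values

    # rows = decode_pcx_1bpp(open("label.pcx", "rb").read())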
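
For item 13: a minimal UDP example showing that both ends must agree on the protocol and port. The address, port and command string are placeholders; the clock would only answer if its firmware actually runs a UDP/IP stack and listens on that port.

    # Send one UDP datagram to the device and wait briefly for a reply.
    import socket

    CLOCK_ADDRESS = ("192.168.1.50", 5000)       # placeholder address and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(b"TIME?", CLOCK_ADDRESS)         # whatever command the device defines
    try:
        reply, _ = sock.recvfrom(1024)
        print("reply:", reply)
    except socket.timeout:
        print("no reply: wrong port, wrong protocol, or no IP stack on the device")
    finally:
        sock.close()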
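
For item 16: an object sketch of the suggested refactoring, with illustrative class names. The map holds no vehicle reference; the vehicle is passed to the one method that needs it, so the same map serves one vehicle or many.

    # The vehicle is an argument to plot_position(), not an attribute of the map.
    from dataclasses import dataclass

    @dataclass
    class Vehicle:
        name: str
        latitude: float
        longitude: float

    class Map:
        def __init__(self):
            self.plotted = []                    # no vehicle reference stored here

        def plot_position(self, vehicle):
            """Plot whichever vehicle the caller hands in, whenever needed."""
            self.plotted.append((vehicle.name, vehicle.latitude, vehicle.longitude))
            print(f"plotted {vehicle.name} at ({vehicle.latitude}, {vehicle.longitude})")

    world = Map()
    for v in (Vehicle("truck_1", 42.0, -88.0), Vehicle("truck_2", 42.1, -88.2)):
        world.plot_position(v)                   # same method, any number of vehicles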
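
For item 18: the detect/close/reopen recovery as control flow only. open_session(), session.read() and FramingError are hypothetical stand-ins for the real serial/VISA calls and error condition, not an actual driver API.

    # Reopen the connection whenever a framing error is seen, retrying a few times.
    import time

    class FramingError(Exception):
        """Stand-in for the driver's framing-error condition."""

    def open_session():
        raise NotImplementedError("replace with the real open call")

    def read_with_recovery(max_attempts=3):
        session = open_session()
        for _ in range(max_attempts):
            try:
                return session.read()            # normal case: hand back the data
            except FramingError:
                session.close()                  # drop the out-of-sync connection
                time.sleep(0.1)                  # brief pause before resynchronizing
                session = open_session()         # reopen; a little data may be lost
        session.close()
        raise RuntimeError("still seeing framing errors after reconnecting")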
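
For item 19: the producer/consumer split in text form. The event handler acknowledges the user immediately and queues the work; a worker thread does the slow part. The two-second task and the button names are illustrative.

    # UI events are acknowledged at once; long-running work happens in a consumer.
    import queue
    import threading
    import time

    work_queue = queue.Queue()

    def consumer():
        while True:
            job = work_queue.get()
            if job is None:                      # shutdown sentinel
                break
            time.sleep(2.0)                      # stand-in for a long-running state
            print("finished", job)

    def on_button_press(name):
        """Producer / event handler: respond right away, defer the real work."""
        print(name, "received - processing...")  # immediate feedback to the user
        work_queue.put(name)

    worker = threading.Thread(target=consumer)
    worker.start()
    on_button_press("Run Test")                  # returns immediately
    on_button_press("Save Report")               # queued behind the first job
    work_queue.put(None)
    worker.join()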
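
For item 20: a rough text-language analogue of a functional global, with a lock playing the role of the non-reentrant VI boundary: one piece of state, serialized access, and room for extra actions beyond plain get and set.

    # Functional-global-style store: serialized access plus richer operations.
    import threading

    class FunctionalGlobal:
        def __init__(self, initial=0):
            self._value = initial
            self._lock = threading.Lock()        # only one caller inside at a time

        def set(self, value):
            with self._lock:
                self._value = value

        def get(self):
            with self._lock:
                return self._value

        def increment(self, step=1):
            """The 'other functionality' mentioned in the post: an atomic
            read-modify-write, so there is no race between get and set."""
            with self._lock:
                self._value += step
                return self._value

    counter = FunctionalGlobal()
    counter.set(10)
    print(counter.increment())                   # 11, with no update window in between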
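
For item 23: a sketch of the type/length/value layout described there. The one-byte type codes and the four-byte big-endian length are arbitrary choices for illustration; this shows the general idea, not the actual SNMP BER encoding.

    # Each field is: type (1 byte) + length (4 bytes, big-endian) + value bytes.
    import struct

    def pack_field(field_type, payload):
        return struct.pack(">BI", field_type, len(payload)) + payload

    def unpack_fields(message):
        fields, offset = [], 0
        while offset < len(message):
            field_type, length = struct.unpack_from(">BI", message, offset)
            offset += 5
            fields.append((field_type, message[offset:offset + length]))
            offset += length
        return fields

    # Build a two-field message and take it apart again.
    msg = pack_field(1, b"STATION_07") + pack_field(2, struct.pack(">I", 1234))
    for ftype, value in unpack_fields(msg):
        print(ftype, value)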