Mark Yedinak


Posts posted by Mark Yedinak

  1. QUOTE (Aristos Queue @ Mar 12 2009, 03:02 PM)

    Well, you still end up processing the event to some degree. In the case of this X-Control, I don't know if you would really be saving much time.

    QUOTE (Aristos Queue @ Mar 12 2009, 03:02 PM)

    2. Catch the event and rethrow as a different event that is handled somewhere else. As of LV 8.6, there are lossy queue primitives, so you can enqueue your event into a separate handler with the lossy behavior, so if the queue fills up, you just start dropping updates.

    Unless you end up spawning a background task for your X-Control, I don't think this solution would work very well for X-Controls. If you did queue something into a lossy queue, what part of your code is servicing it? X-Controls are intended to encapsulate the processing required for complex custom controls. I don't think it would be a good idea for any of the processing to be handled outside of the X-Control itself (if you did, it sort of defeats the purpose of making it an X-Control in the first place), and would a task spawned from the X-Control itself have access to the display? I guess I would have to look at the rolling LED X-Control example.

    I do agree with you, though, that we as programmers need to have control over how the events are filtered, as well as if and when. But it would be nice if we had more flexibility than what we currently have.
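
    For anyone unfamiliar with the lossy behavior being discussed, here is a rough Python stand-in for what a lossy enqueue does when the queue is full. This is only a sketch of the drop-oldest idea (single producer assumed), not NI's implementation:

        import queue

        updates = queue.Queue(maxsize=8)       # bounded queue standing in for a lossy LV queue

        def lossy_put(q, item):
            # when full, drop the oldest element before enqueueing the new one,
            # so a slow consumer simply misses intermediate updates
            if q.full():
                try:
                    q.get_nowait()
                except queue.Empty:
                    pass                       # a consumer emptied it first
            q.put_nowait(item)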

  2. QUOTE (crelf @ Mar 12 2009, 03:25 PM)

    That's a good idea - or maybe you could right-click on a VI/primitive and select "Visible Items > Sequence Container" which would add a 3 pixel border around the VI/primitive that we can wire any datatype to. Then it'd be obvious which wires are for dataflow only and which ones are connected to the connector pane.

    This is basically what I have been suggesting with the Null wire. The Null wire would simply connect to a VI without being required to be connected to the connector pane. Essentially we are both suggesting the same thing, with the exception that your suggestion would allow you to use any wire to control the sequence, whereas mine would require the Null wire. Either way, I don't think anyone in this discussion would disagree that we could improve upon the current sequence frame.

  3. QUOTE (menghuihantang @ Mar 12 2009, 12:59 PM)

    What do you mean by 'style'? Is this what they called that you have to follow NI's favorites other than your own style?

    Yes. When I say style I mean things like aligning objects on the FP, minimizing bends in wires, no backward-flowing wires, labeling wires and constants, and similar items. NI does have preferred ways of doing these things, and it is best to follow their style for the exam. If you have access to the VI Analyzer you will have a very good idea of the types of things NI will look at when grading your code style.

  4. QUOTE (jlokanis @ Mar 12 2009, 12:50 PM)

    Great link! Thanks!

    I have considered load balancing my test system across multiple machines but it does make it a nightmare to do UI interaction. LabVIEW just does not offer an easy way to embed UIs from other machines. If only I could have a VI in a sub panel that was actually running on a different computer over the network!

    But, for now, I will be looking at a Nehalem based system to get me over the hump. It is supposed to address the memory access and multicore bottlenecks and yield at least 30% improvement over Penryn based cores.

    -John

    Are you saying you need to have multiple UIs in your system, or that coordinating data from multiple applications on multiple machines into a single UI is the problem? If it is the latter, you could use network queues to send data back and forth between the controlling UI and the slave applications. If you want bidirectional communication, I would have a master UI queue that all of your individual applications can post to in order to inform the UI that an update is required. As part of your message you would need an application ID. Each application would have its own queue for receiving updates from the UI, and the master UI would need to manage those application queues. As processes start up they register with the UI and give it the connection specifics for their queue. The connection specifics for the UI code itself would need to be static (fixed server address and port number) so all of the remote processes know how to register with it. If the UI wanted to broadcast an update to all remote applications it could simply iterate over all of its remote queues and post the event.
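
    To make the register-and-broadcast idea concrete, here is a rough single-process sketch in Python. The names and the queue-per-application layout are my own, and over a network these would be network queues or TCP connections rather than in-memory queues:

        import queue

        ui_inbox = queue.Queue()       # master UI queue that every application posts to
        app_queues = {}                # one receive queue per application, keyed by ID

        def register(app_id):
            # each process registers on startup and gets its own receive queue
            app_queues[app_id] = queue.Queue()

        def post_to_ui(app_id, update):
            # applications tag every message with their application ID
            ui_inbox.put((app_id, update))

        def broadcast(update):
            # the UI iterates over all registered queues to push an update
            for q in app_queues.values():
                q.put(update)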

  5. QUOTE (ldindon @ Mar 12 2009, 04:54 AM)

    When you said "Internally", do you mean inside the XControl? If the answer is "yes" I do not understand what you mean.

    I agree that you need to make your code as bulletproof as possible, and you don't have control over how your users will use your X-Control. I also agree that NI could improve the X-Control itself by allowing updates to be filtered or flushed to allow for situations like this. However, when I was referring to using the value internally, I was referring to optimizing the architecture of the code to avoid things like rapid updates to a UI display. In general it is best to avoid doing this, since it will impact your overall performance. I usually don't worry too much about this if the task I am working on will not be doing any type of high-frequency calculation or processing. However, if I am working on something that requires high performance, I like to completely separate the UI from the processing and update it at a rate that is acceptable to humans; this still provides "realtime" updates yet doesn't overburden the application with unnecessary work, such as updating the value of a control thousands of times per second.
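
    As a sketch of that separation (Python standing in for two parallel LabVIEW loops; update_display and the 100 ms interval are placeholders of my choosing):

        import threading, time

        latest = None                          # single-element "mailbox" for the newest value
        lock = threading.Lock()

        def processing_loop(source):
            global latest
            for value in source:               # runs as fast as the processing demands
                with lock:
                    latest = value             # overwrite; no UI event per update

        def ui_loop():
            while True:
                time.sleep(0.1)                # ~10 Hz is plenty for a human
                with lock:
                    value = latest
                update_display(value)          # placeholder for the actual UI update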

  6. Well, so far it looks like neither of these solutions will work. The Image Toolbox only supports 8-bit-per-pixel PCX images, and mine are 1-bit-per-pixel images. The Image Toolbox doesn't include any block diagrams, so I wouldn't be able to modify them for the different color depth.

    The IMAQ solution doesn't know how to handle the run-length encoding used by the PCX format. I have been trying to roll my own, but so far the images aren't displaying correctly. It appears that I am decoding the RLE data correctly, but the image is definitely not being displayed correctly. If anyone has any other suggestions I would appreciate hearing them.
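
    For reference, the PCX run-length scheme itself is simple: a byte with the top two bits set is a run marker whose low six bits give the repeat count for the byte that follows; any other byte is a literal. A minimal Python sketch of the standard decode (illustrative only):

        def decode_pcx_rle(data, expected_len):
            out = bytearray()
            i = 0
            while i < len(data) and len(out) < expected_len:
                b = data[i]
                if b >= 0xC0:                          # top two bits set: run marker
                    count = b & 0x3F                   # low six bits are the repeat count
                    out.extend(bytes([data[i + 1]]) * count)
                    i += 2
                else:                                  # literal byte
                    out.append(b)
                    i += 1
            return bytes(out)

        def unpack_1bpp_row(row_bytes):
            # for 1-bit-per-pixel images each decoded byte holds 8 pixels, MSB first
            return [(b >> (7 - k)) & 1 for b in row_bytes for k in range(8)]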

  7. Does anyone know if there is any LabVIEW code for converting a graphic image stored as a PCX image to a raw bitmap? In our printer testing we have the capability of receiving the label image which will be printed. Some of our printers return raw bitmaps and others use the PCX format. Since many of our existing tests were developed for the printers which use the bitmap format, I would like to convert the PCX images to a raw bitmap. Does anyone have code for doing this?

  8. QUOTE (nickg @ Mar 10 2009, 12:24 PM)

    well i am working on a digital clock that counts by itself. But every 12 hours it communicates with the computer via a ethernet mini-board (contains the ENC28j60 chip from microchip ). Therefore, i am trying to find a way through labview to copy the computer clock to my digital clock (to act like a refresh if my clock has a delay in counting).

    i have labview 8.2.1 and for some reason i cant open the UDP exemples labview offers me :s

    You need to see how your clock is using the Ethernet chipset. In order for it to use UDP or TCP, it has to have a network stack running on it that supports the IP protocol as well as the UDP and TCP protocols. If the clock only supports the Ethernet layer, then you will not be able to communicate with it directly using LabVIEW. There is more to communicating over a network than simply plugging a device into a network switch or hub. You need to know the specific protocols and ports being used in order to get two devices talking to each other.

    As for the examples, they should be available. What LabVIEW package are you using: the base LabVIEW package, the developer suite, a student edition, or an evaluation version? Not all features are available in all packages, though I thought that the basic TCP and UDP options were available in all versions.
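
    For the actual time refresh the traffic is tiny; in LabVIEW it is just UDP Open/Write/Close, and the Python equivalent below shows the whole exchange (the clock's address, port, and time format are assumptions that must match what the firmware expects):

        import socket, time

        CLOCK_ADDR = ("192.168.1.50", 5005)    # hypothetical clock IP and port
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.sendto(time.strftime("%H:%M:%S").encode(), CLOCK_ADDR)
        s.close()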

  9. QUOTE (TobyD @ Mar 10 2009, 09:47 AM)

    I would start here...

    1. Turn on your computer

    2. Open LabVIEW

    3. Click "Empty Project"

    Seriously though...that is a really vague question. You need to start by learning enough about LabVIEW to be able to ask a specific question. I would recommend the Basics I & II classes offered by NI, or buy LabVIEW for Everyone and go through it on your own. Either one of these methods should give you enough knowledge to write a simple serial communications control program.

    In addition to the advice above I would say that you should look at the examples for VISA communications.

  10. I think we need more information before we can really give you any help. What is controlling the circuit you are trying to communicate with? Is this a third-party piece of hardware, or is this something you are developing? Both ends of the communication need to be network-aware, so simply sending data from your computer over the network may not work. The circuit will need to be listening for the message on the appropriate port, whether it is a UDP or TCP port.

    As for programming, the examples provided in LabVIEW for both the UDP and TCP VIs should help.
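
    As a sketch of what the listening end has to be doing (Python for brevity; the port number is whatever both sides agree on):

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", 5005))                 # the circuit must listen on the agreed port
        data, sender = s.recvfrom(1024)    # blocks until the PC sends something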

  11. It sounds like your object could be refactored. Rather than initializing the map with a reference to a vehicle object, why not give the map object a method for plotting a vehicle's position? Simply call this method with the correct vehicle reference each time you need to plot a position. This will work equally well for a single vehicle or multiple vehicles, because the vehicle information is only required by, and passed to, the method that needs it. The vehicle reference should not be an attribute of the map.
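
    In rough class terms (Python pseudocode; the method and attribute names here are made up for illustration):

        class Map:
            # note: no vehicle reference is stored at construction time
            def plot_vehicle(self, vehicle):
                self.draw_marker(vehicle.position())   # vehicle is passed in per call

        # works identically for one vehicle or many:
        for v in vehicles:
            site_map.plot_vehicle(v)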

  12. QUOTE (crelf @ Mar 6 2009, 12:12 PM)

    Can someone from NI confirm or deny this memory leak? Ben - have you reported it to NI?

    I can only provide anecdotal evidence that there is no memory leak. As I stated earlier, in our environment we are constantly opening and closing connections. If there truly were a memory leak, we would have all kinds of issues with our applications.

  13. Your approach of detecting the error, closing the connection, and reopening should work fine. You may lose some data in the process. The framing error is most likely caused by some clock drift; re-establishing the connection should get you back in sync.

    QUOTE (neBulus @ Mar 6 2009, 09:48 AM)

    I question the need to close and re-open. Just flushing the port and doing a retry would be my first attempt at handling this situation. Closing and re-opening a VISA session will bite you eventually unless the app is restarted regularly.

    I don't agree with this. In our applications we open and close VISA sessions tens of thousands of times, and we do not see any problems.
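
    For what it's worth, the close-and-reopen recovery is only a few lines. A rough sketch using the PyVISA bindings (the resource name is a placeholder, and I am assuming the framing error surfaces as a VisaIOError):

        import pyvisa

        rm = pyvisa.ResourceManager()
        inst = rm.open_resource("ASRL1::INSTR")    # placeholder serial resource
        try:
            reply = inst.read()
        except pyvisa.errors.VisaIOError:
            # framing error, most likely clock drift: close and reopen to resync
            inst.close()
            inst = rm.open_resource("ASRL1::INSTR")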

  14. Personally, I prefer the producer/consumer pattern. I like to have all of the user-interface events handled by an event structure. This provides faster feedback to the user, even if the actual action is not taken immediately; you can always let the user know that it is being processed. With a state-machine-only architecture you always have to wait for a state to complete before you can detect input from the user. In some cases this can be a long time, and the user is left wondering whether the application is aware of their input.
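
    The pattern in miniature (Python threads standing in for the two LabVIEW loops; handle and the message are placeholders, and the event handler only enqueues, so the UI never blocks on a long state):

        import queue, threading

        jobs = queue.Queue()

        def consumer():
            while True:
                action = jobs.get()     # blocks until the producer posts work
                handle(action)          # placeholder for the long-running state
                jobs.task_done()

        threading.Thread(target=consumer, daemon=True).start()

        # inside the UI event handler: enqueue, tell the user it is in progress, return
        jobs.put("run_measurement")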

  15. QUOTE (mesmith @ Mar 5 2009, 12:05 PM)

    I agree that a basic string is more portable. From what I have observed, the variant type is fairly easy to decode; however, since this is an NI-internal structure, there is no guarantee that things won't change over time. Variants aren't much more than a flattened string with additional type and size information embedded in the string.

    Personally, I like the structure used in SNMP messages. (Actually, the entire SNMP packet is constructed this way.) Each item consists of three parts: the first is the data type, the second is the length of the following data, and the third is a variable-length byte array which is the data itself. The type and length fields are fixed length. This structure is easy to decode, and it is platform- and language-independent. Of course, your message is not necessarily sized as efficiently as possible, but this is generally not that big of an issue.
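
    A sketch of that type/length/value framing in Python (the field widths are my own choice; SNMP's actual BER encoding uses variable-length fields):

        import struct

        def tlv_pack(msg_type, payload):
            # fixed-width type (1 byte) and length (4 bytes, big-endian), then the data
            return struct.pack(">BI", msg_type, len(payload)) + payload

        def tlv_unpack(buf):
            msg_type, length = struct.unpack_from(">BI", buf, 0)
            return msg_type, buf[5:5 + length]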

    My suggestion of using variants is more of a LabVIEW-only solution, but it didn't sound like other applications would be involved. Your approach is more universal.

    QUOTE (jdunham @ Mar 5 2009, 12:35 PM)

    I have some wrapper VIs which invoke the SVN command-line client via System Exec.vi. I guess I should open-source them, since they are pretty simple. I'm sure other heavy SVN users have something similar.

    Jason

    If you could package them up, that would be great. I am working on an automated test system that has literally thousands of data files that we need to manage. We have discussed using CVS for this, but I would prefer to go with SVN. This is a feature we will be adding in the near future, and if we could use some existing code it would save us some development effort.

  16. QUOTE (ldindon @ Mar 5 2009, 10:35 AM)

    Of course I do not need to update the display that frequently.

    Unfortunately the XControl architecture does not allow (as far as I know) updating the display less frequently.

    Each time the data changes, the XControl facade ability VI is called to handle the event; then the event structure times out and the VI exits.

    If in the meanwhile someone has changed the XControl value 10 times, then 10 events will be stacked somewhere by LabVIEW. This will result in 10 different calls to the XControl facade VI to handle the events one by one. When you are handling an event you do not know that 10 others of the same type are waiting, so there is no room for optimization by handling only the last one in the queue and skipping the others.

    A solution would be to invalidate the display each time the data changes and to update it asynchronously, outside the XControl facade VI, at a reasonable refresh rate. But I have no idea how to achieve such a solution with the XControl "API".

    What you say is true, but since the X-Control is a user-interface item you could simply gate when you actually update its value. Internally, if you need to work with the latest and greatest value, you could use a shift register or some other mechanism that does not trigger the event in the first place.

    There was another discussion regarding the event structure and its queuing of events which highlighted the desire for us to have the ability to filter events, discard previous events of the same type, or have some type of "debounce" on the events.
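
    The gating itself can be as simple as a timestamp check in the value-change handler; a sketch (the interval is arbitrary and redraw is a placeholder):

        import time

        MIN_INTERVAL = 0.05            # at most 20 redraws per second
        last_draw = 0.0
        latest = None                  # plays the role of the shift register

        def on_value_change(value):
            global latest, last_draw
            latest = value             # always remember the newest value
            now = time.monotonic()
            if now - last_draw >= MIN_INTERVAL:
                redraw(latest)         # placeholder for the actual display update
                last_draw = now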

  17. QUOTE (dblk22vball @ Mar 5 2009, 10:35 AM)

    ok, so going off of the thought of the fixtures accessing the master file on the server at all times, I asked IT about this.

    They are actually going to give me my own drive and they said that I could poll away, since they can put it on a spare server....

    based on previous experience, I was not expecting that....

    i will also have the program local cache it as a back up.

    thanks

    That's good. I would still consider using the message broadcast instead of polling. It will be much more efficient and it will require very little overhead in your applications.

  18. It would seem that you have several options. One would be for your applications to poll the configuration file to see if it has been modified, and update themselves if it has; the modification date of the file could be used for this, which also means your computers' clocks should be synchronized. A second option would be to have your applications update themselves at some regular interval. The third, and most complex, option is to have a central server application that your applications register with. If any instance of your application updated the configuration, it would notify the update server, which in turn would send a message to all registered applications that the update has occurred.

    A slight variation of this approach would be for each application to have a process that listens for broadcast messages; you could use UDP for this. When an application updated the configuration settings, it would broadcast a message indicating that the update has occurred, and all of the other applications could update their settings accordingly. This option is probably the most flexible. It would require that your IT staff allow UDP broadcast packets on your network, and it may require them to open a specific port. You could create a simple protocol for this where your broadcast messages are a message type and a variant; based on the message type you could send additional information in the variant data, and the message type would dictate how to interpret the data itself.
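
    A sketch of the broadcast side in Python (the port and the one-byte message type are arbitrary choices that all of the applications would have to share):

        import socket

        BCAST = ("255.255.255.255", 5006)       # agreed-upon broadcast address and port

        def announce_update(payload=b"config changed"):
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(b"\x01" + payload, BCAST)  # message-type byte, then variant-like data
            s.close()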
