Posts posted by JamesMc86

  1. Thanks for all your suggestions.

    I think I will probably go with the OOP approach, for the challenge as much as anything! It does look like the DOM parser could be used this way (I like the idea of SQLite as well; I'm going to be playing with it in another part of the project anyway, and I think it can operate purely in memory), but I think doing it in OOP will keep it more reusable and portable for the future.

    EasyXML is one I have heard of, but in this case XML is only a small part of what I need; I also want an easy way to deal with a tree structure within the application.

  2. It's actually not that one, but you get three or four errors, the last of which mentions bool.

    I have seen similar errors before. There is a bug in LabVIEW 2010 where, if you have FPGA nodes and remote debug an RTEXE, these errors get thrown; from experimenting with that, I know each message corresponds to one node causing the issue (in that case it matched the number of read/write nodes).

  3. Hi,

    I am working on something that I was looking to see if anyone had any experience on.

    I am working on an application which needs a hierarchical data structure in a couple of places. The main one is a representation of a file structure (not one that actually exists on disk), but there are a couple of others that will end up being generated as XML files. I am torn between two key options:

    1. Implement it in pure LabVIEW (there is an example on the NI Community I would start from). I would be tempted to have a couple of options: one that is a pure linked-list-style implementation, and one that also uses a variant dictionary to track the URLs of nodes, making URL-based lookup a constant-time function (at the cost of slowing modifications to the existing tree).

    2. Use the Microsoft DOM interface (.NET based in LabVIEW, I think, but it could be ActiveX). While this is designed for working with files, I believe you can modify the document purely in memory, and it would mean my file-writing routines are already done. I am not fussed about portability (this will only run on Windows), but I am concerned about performance. My trees will be quite small, but the DOM may not perform well in memory. Or is the performance good enough that it isn't worth the effort of the first implementation?

    Has anyone come across a similar problem and have any advice on these approaches, or alternative approaches?
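    For anyone weighing option 1, here is a minimal sketch of the idea in Python rather than LabVIEW (all names hypothetical): a linked-list-style tree plus a dictionary keyed by URL, so lookup is constant-time while insertions pay a small bookkeeping cost to keep the index in sync.

```python
class Node:
    """A tree node holding a name, an optional value, and child links."""
    def __init__(self, name, value=None):
        self.name = name
        self.value = value
        self.children = []

class Tree:
    """Tree with a path index: constant-time lookup, slower mutation."""
    def __init__(self):
        self.root = Node("")
        self._index = {"/": self.root}            # URL -> Node

    def insert(self, parent_url, name, value=None):
        parent = self._index[parent_url]          # KeyError if parent absent
        node = Node(name, value)
        parent.children.append(node)
        url = parent_url.rstrip("/") + "/" + name
        self._index[url] = node                   # keep the index in sync
        return url

    def lookup(self, url):
        return self._index[url]                   # O(1) by construction

t = Tree()
t.insert("/", "docs")
t.insert("/docs", "readme.txt", value="hello")
assert t.lookup("/docs/readme.txt").value == "hello"
```

    The trade-off is exactly the one described above: every structural change must also update `_index`, so deep moves or renames get slower in exchange for the fast lookup.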

  4. I'm also kind of on board. Certainly for the first level there seems to be little benefit and much complexity added. Another way you could read this stage is that the type here is a property of the first message rather than a unique class of message.

    For the second stage there seems to be more to gain, but not huge amounts. You would add some scalability at your read-message stage, but only in that area, for extra complexity that propagates throughout the code. (I would get frustrated if I had to dig through two layers of delegates every time I wanted to read the code that was actually running.)

    My 2c: the suggestions made look good if you want to dispatch it, but I think (with the limited scope of what you have described here) it could be unnecessarily complex.
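    The "type as a property of the message" reading above can be sketched in a few lines of Python (hypothetical names, not anyone's actual framework): one message class with a kind field and a single visible dispatch point, instead of a class per message type.

```python
class Message:
    """One message class; the type is just a field, not a subclass."""
    def __init__(self, kind, payload=None):
        self.kind = kind
        self.payload = payload

def handle(msg):
    # A single dispatch point you can read top to bottom,
    # rather than two layers of delegate classes.
    if msg.kind == "set":
        return f"set to {msg.payload}"
    if msg.kind == "get":
        return "current value"
    raise ValueError(f"unknown message kind: {msg.kind}")

assert handle(Message("set", 5)) == "set to 5"
assert handle(Message("get")) == "current value"
```

    The cost is that adding a new kind means editing `handle`; the benefit is that the code that actually runs is all in one place.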

  5. The example is the sample project for the Actor Framework. Open the Create Project wizard from the splash screen, load either the Actor Framework template project or the Evaporative Cooler project (an example of how to use the template), and run these up. You will see the splash screen that AQ was referring to.

  6. What is the message? Is it simply an ack, or something more?

    I think this is a question of responsibility. It sounds like this manager is a zone manager: it receives messages and dispatches them to the zones, so it strikes me that it should likely also be responsible for returning messages about the zones. If it is a simple ack that the zone has been modified, then as the zone manager is responsible for performing the modification, it should also be responsible for generating and sending the ack message. If it is something more specific, it should probably be generated from the zone somehow, but the zone manager is still responsible for communication with the outside world and should be in charge of that communication.

    In the context you have given, I think this would be my preferred route, but I suspect any of the approaches you describe could be successful.

  7. I would cast another vote for a TCP version. VI Server isn't designed for messaging performance, so there are going to be trade-offs, and performance seems like the biggest problem. Yes, calling is simple, but you don't get anything for free; each message requires:

    1. Connection made to LabVIEW application.

    2. Remote VI found, loaded and called.

    3. Reference opened to existing queue.

    Each of these offers a new point of failure in the system.

    What you could do is get the best of both worlds. You should be able to write some TCP code to accept requests and package it into a single VI that you can drop into any application to add the capability. This could then maintain connections to clients for the best performance, but there's nothing to say you can't still make a calling VI that opens a reference, sends the message, and closes the connection again.
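    The best-of-both-worlds shape above can be sketched in Python with the standard library (this is an analogue of the idea, not LabVIEW TCP VIs; all names are made up): a listener that feeds received messages to the application and acknowledges them, plus a one-shot caller that connects, sends, and disconnects.

```python
import socket
import socketserver
import threading

class MessageHandler(socketserver.StreamRequestHandler):
    """Accept one line per request and hand it to the application."""
    def handle(self):
        line = self.rfile.readline().strip()
        self.server.inbox.append(line.decode())   # stand-in for an app queue
        self.wfile.write(b"ACK\n")

def start_listener(port=0):
    """The 'drop-in' piece: start a threaded listener on an ephemeral port."""
    server = socketserver.ThreadingTCPServer(("127.0.0.1", port), MessageHandler)
    server.inbox = []
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def send_message(host, port, text):
    """One-shot caller: connect, send, read the ack, disconnect."""
    with socket.create_connection((host, port)) as s:
        s.sendall(text.encode() + b"\n")
        return s.makefile().readline().strip()

server = start_listener()
reply = send_message("127.0.0.1", server.server_address[1], "zone 3 updated")
server.shutdown()
assert reply == "ACK"
assert server.inbox == ["zone 3 updated"]
```

    A long-lived client would simply keep the socket from `create_connection` open between messages instead of reconnecting each time.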

  8. I still haven't gotten my head around this VI.

    From the help file (and icon) it appears to be specifically designed for downcasting in cases where LabVIEW may not think it can go as far down the hierarchy as you know it can. I say specifically downcasting because upcasting can be done just by wiring (with a coercion dot), but the examples here show it can be used for upcasting as well.

    My question is: what exactly is the difference between this and the stock downcast? I believe the stock downcast will also throw an error if the cast cannot be done. This one appears to be designed as an inline operation; everything suggests there must already be an object.
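    For readers less familiar with the semantics being discussed, here is the downcast behaviour sketched in Python (hypothetical class names; LabVIEW's actual primitive works on wires, not function calls): the cast succeeds only if the run-time type really is the more specific class, and errors otherwise.

```python
class Instrument:
    """Generic parent type."""

class Scope(Instrument):
    """More specific child type with extra behaviour."""
    def acquire(self):
        return "trace"

def downcast(obj, target):
    """Checked downcast: error out if the run-time type can't satisfy it."""
    if not isinstance(obj, target):
        raise TypeError(f"{type(obj).__name__} is not a {target.__name__}")
    return obj

generic = Scope()                    # statically treated as the parent type
scope = downcast(generic, Scope)     # succeeds: run-time type matches
assert scope.acquire() == "trace"
```

    The interesting case is the failure path: `downcast(Instrument(), Scope)` raises, which is the error-out behaviour both VIs appear to share.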

  9. It's true the cache should contain all that is needed, although I wonder if changing platforms has more practical issues in tracking the changes; at the minute I have not been able to dig out much on the algorithm that is used.

    As a thought, is this something where packed project libraries could help? You could have most of the code compiled for x64 in these, which would alleviate the issue if your code organisation suits it. I guess it would require a change in dependency locations as well, though, so maybe not.

  10. I have the advantage of working for NI, so I have ready access to PXI FPGA boards which can read the lines and return what they see. If you have any, this is quite straightforward and can give 5 ns resolution.

    The risers look good, but remember that the trigger lines are on the additional connectors, not the cPCI/e connectors.

  11. To do this without any sort of structure around it is really tough; computers are still way behind humans in terms of object recognition. There are two main approaches that I can see:

    1. Create a depth field using a 3D imaging technique à la Kinect. This is the approach many take, as the hand will then stand out from the background and be distinguishable.

    2. Single-camera object tracking. The Vision module now includes some pretty good object tracking algorithms. One of my colleagues has been playing with them and managed to get one to track his eyes. The problem with this will be identifying the object (the hand) in the first place, and it staying consistent enough. This would also require whole-hand gestures; it is unlikely to be able to pick up finger movements. I suspect a scheme similar to this is what the new Samsung interface uses, although that appears to pick up some elements of hand changes.

    I hope that helps. There may be more schemes out there, but these are the two that I am aware of.

    James

  12. I don't know exactly how VIPM deals with this, but there are two external interfaces to VI Server: TCP or ActiveX. My thought is they would have to use ActiveX to be able to actually launch the application, but that would be Windows only.

    Of course, there may be something I'm missing. You could also launch LabVIEW using System Exec, but you would not be able to control it through that.

  13. Sorry, inline was a bad turn of phrase; what I meant was a subVI in the process rather than a parallel process. I'm not sure what target you are using, but if you are pushing it that hard then I would think the transport of the data and the thread swap may be a problematic overhead as well, though it would be interesting to know if not. That said, if the process can continue processing data while the main thread continues, then this would seem a good way to go.

    I would be interested to know how this goes; I'm always intrigued by OOP on RT and, primarily, where it might break down.

  14. What are these VIs doing? Are they too slow because of the processing, or because of the dynamic dispatch?

    I am intrigued by a form of channeling pattern for processes so that you can have common code. Swapping the dynamic-dispatch VI means changing the object type, so I guess you need some form of factory method which loads a new object with the new VI. I would think about breaking down these processes in a more defined and/or fixed way where possible, though. I highly doubt this will be faster than an inline VI (though it will make the code parallel), and debugging is going to be a nightmare if the processes don't have fixed responsibilities.
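    The factory method mentioned above could look something like this Python sketch (a registry of process classes keyed by name; all class and function names here are invented for illustration): "swapping the VI" becomes constructing a different object from the registry.

```python
# Hypothetical factory: pick the process class from a registry by name,
# the analogue of loading a new object to swap the dispatched VI.
REGISTRY = {}

def register(name):
    """Class decorator that records a process type under a name."""
    def wrap(cls):
        REGISTRY[name] = cls
        return cls
    return wrap

@register("filter")
class FilterProcess:
    def step(self, x):
        return x * 0.5          # stand-in for real per-sample processing

@register("scale")
class ScaleProcess:
    def step(self, x):
        return x * 10

def make_process(name):
    """Factory method: a KeyError here names the missing process type."""
    return REGISTRY[name]()

assert make_process("scale").step(2) == 20
```

    Keeping the registry explicit at least makes it obvious, when debugging, which concrete type could have been dispatched to.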

  15. I would say yes. A good API should throw an error if it is unsuccessful, and an absence of an error should mean the action was taken successfully. If something didn't throw an error in this case, I would say it is a bug in the API.

    If something works but extra messages are required, then warnings should be used, although not many of the core APIs use these often.

    In terms of memory leaks, I would suggest it is bound to be possible if there are bugs; however, if you are getting repeated errors to the extent that they leak memory, then you have a different issue. If you detect the error and shut down the application, then I would argue the leak is somewhat irrelevant, as it won't occur under normal working conditions.

  16. Surely the same logic can follow for these as well?

    I think this whole discussion highlights the need for a solid error-handling strategy. There are specific errors you can handle there and then, such as a network connection failing, where you use local handling like retrying a failed open; but good central handling will catch and handle the errors you cannot anticipate, and I promise you cannot consider every error!
