
Neil Pate

Everything posted by Neil Pate

  1. Yes, that is what I suspected you wanted. Do you understand what the rest of the code is doing? It is displaying the value of each element in your clusters/arrays, not the possible set of values... All you need to do is make an array or cluster with one element set to each value of the enum.
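LabVIEW snippets are graphical, so here is a rough Python sketch of the idea, with a hypothetical `State` enum and `all_values` helper standing in for the LabVIEW enum and the array/cluster: build a collection containing one element per enum value, so a viewer that walks the collection ends up displaying the whole set rather than a single runtime value.

```python
from enum import Enum

class State(Enum):
    # Hypothetical enum standing in for a LabVIEW enum typedef
    IDLE = 0
    RUNNING = 1
    ERROR = 2

def all_values(enum_cls):
    """Build a list with one element per enum value, mirroring the
    'array with one element set to each value' suggestion above."""
    return [member for member in enum_cls]

print([m.name for m in all_values(State)])  # ['IDLE', 'RUNNING', 'ERROR']
```

A viewer iterating this list then shows every possible value, which is exactly what feeding a single enum to the viewer cannot do.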
  2. smithd, Ton's variant probe is great, but it does not do any more than the Cluster viewer class that the OP included. His question related to the desire to display enum values as children rather than in the value column, and I am trying to understand why this is desirable, as an enum is not a collection of values at runtime (unlike a cluster or array).
  3. OK so that is better, although still I would ditch the OpenG directory and just install them properly. So I don't really understand your question. The Tree is a representation of your cluster, the hierarchy of the tree (parent/child relationship) represents the nested clusters and values. The value column shows the actual value of each element, this appears to be working correctly. What are you trying to achieve? Can you post a screenshot of what you would like to see?
  4. Your zip is all messed up, so many dependencies in the wrong place etc. Take a look at the Files tab in your project to see where all your files are. You need to include everything not in the LabVIEW directory if you want somebody else to check it out. Also, why are your OpenG VIs not inside the LabVIEW directory?
  5. I have nothing substantial to add, other than that I also placed my calls inside conditional disable structures, which in the end I never bothered to disable as performance was well within limits. The calls themselves are quite lightweight, I think.
  6. Bump to this thread. I really do not like mutation history being stored. I have a stupid LabVIEW bug where the IDE thinks I am using a class which I am not (really am not). I thought perhaps it was being referenced from the class mutation history. So I poked around in the history of some of my classes and am horrified at the history that exists! I don't know about others, but my standard framework has been adapted over time (several years for my current actor based framework). I still have references in the mutation history for application specific classes from my original project! Urgh... I suppose I have no excuse, I have known about this "feature" for a while, but it is easy to forget about.
  7. Rolf, sounds like a great idea. I love those VIs but yes they can be on the slow side. I would happily accept the trade-off of >= LV2012.
  8. This is a little different to, say, LabJack; this fools the Xilinx compiler into thinking that the "standard" work-flow is being used. There could be some license agreement behind the scenes between NI and Xilinx that we do not know about.
  9. I think perhaps we have a different definition of the word *any*, but I do not wish to take your thunder away. This is very impressive what you have managed to create, clearly a lot of work has gone into it. I seem to recall seeing something like this a few years ago? This is almost certainly a non-NI sanctioned product, are you aware of any legal ramifications of piggybacking into the toolchain? What I mean is, if we have a valid FPGA Toolkit license is this actually legal? My Chinese is not so good, so the website is not much help. Do you have pricing information? Also, the Atom-RIO looks very interesting, I presume it is like a cRIO type clone?
  10. Absolutely. My top-level main GUIs are entirely static with regards to what "views" can be loaded into the main subpanel. I can only shudder to think how complicated this would all get if everything (including every single scrap of business logic) was "templated".
  11. Huh? You ask a question and then answer it yourself. Have I missed something here? Can you share with us how you can target any Xilinx FPGA?
  12. You have my respect. I gave up on resizing GUIs in LabVIEW about a decade ago... Christian, I usually customise a boolean radio control to have nice big buttons vertically, this drives the selection of the current "view" which is just what I call the VI (actor, module, whatever) running in the subpanel. On the left hand side is the customised radio button. In the middle is the subpanel and down at the bottom (the yellow bar) is where my status panel (also a subpanel) gets loaded.
  13. Sounds like you are on the right track. I do like your idea of a subpanel based config page, but in all honesty have not ever actually needed one yet (and I am talking reasonable sized projects here). I personally am not a big fan of the Actor Framework that ships with LV. I have probably not given it enough of a chance, but it just feels too busy for me. I have created my own lightweight Actor Framework (based on User Events and not queues) and that does not use the command pattern (I don't really like having a class per action/message).
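For what it's worth, the "no class per message" idea can be sketched in text form. This is a hypothetical Python analogue, not the actual framework described above: messages are (name, payload) pairs dispatched by name from a handler table, instead of a command class per message.

```python
import queue
import threading

def actor(inbox, results):
    """Minimal actor loop (hypothetical sketch): dispatch on message
    name via a lookup table rather than a command class per message."""
    handlers = {
        "add": lambda payload: results.append(payload),
        "clear": lambda _payload: results.clear(),
    }
    while True:
        name, payload = inbox.get()
        if name == "stop":
            break
        handlers[name](payload)

inbox = queue.Queue()
results = []
worker = threading.Thread(target=actor, args=(inbox, results))
worker.start()
inbox.put(("add", 42))
inbox.put(("stop", None))
worker.join()
print(results)  # [42]
```

The trade-off is the usual one: a handler table is lighter to write and read, at the cost of losing the compile-time checking that a class per message would give you.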
  14. Christian, the original solution you have described is very common; I have definitely used it in the past with good success. Such an implementation will certainly allow you to get up and running nice and quickly. Unless you are updating your indicators at a ludicrous rate (like > 1000 Hz) or throwing huge graphs onto the screen, I would not worry about performance. Modern CPUs are just so good. Even today I still use a tab for a GUI which I know is going to be simple (like a pop-up window or something), but certainly never as my top level. However, the problem is that the tab structure is only good for logical GUI grouping; it does not help you at all on the block diagram, which is where all the trouble will be. I guarantee you, if your application works well and has a predicted lifespan of more than a few years, you will get more and more requests just to add another button or indicator. This will be easy at first, but after a point you will reach a critical mass of controls and indicators where it just becomes too unwieldy for you as a developer to make changes. You will struggle much more than the compiler... If you are comfortable with the idea of a subpanel for your configuration, then yes, I would take it one step further and replace your tab with a subpanel. This architecture forms the basis of most of my GUI-type applications to this day. That said, I probably would not use a subpanel for the config; how likely are you to want to swap that out "on the fly" without changing the rest of the GUI? Note that, as smithd mentioned, if you do go down the subpanel route you will have multiple VIs running in parallel, so you will need to start thinking about how you will communicate between them. There are lots of different techniques (queues, user events etc.); figuring this out is a great way to push your LabVIEW skills to the next level.
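As a rough text-based analogue of the broadcast style of inter-loop communication (a hypothetical Python stand-in, all names invented here): the key difference from a plain queue is that every registered listener gets its own copy of each message, which is roughly what LabVIEW user events give you, whereas a single queue has one consumer.

```python
import queue

class EventBus:
    """Broadcast sketch, loosely analogous to user-event registration:
    each registered listener receives its own copy of every message."""
    def __init__(self):
        self.listeners = []

    def register(self):
        # Each parallel loop registers once and gets a private inbox.
        q = queue.Queue()
        self.listeners.append(q)
        return q

    def post(self, msg):
        # Fan the message out to every registered inbox.
        for q in self.listeners:
            q.put(msg)

bus = EventBus()
gui_inbox = bus.register()     # e.g. the main GUI loop
status_inbox = bus.register()  # e.g. the status-bar subpanel VI
bus.post("config changed")
print(gui_inbox.get(), status_inbox.get())
```

With a plain queue, only one of the two loops would have seen "config changed"; with the broadcast, both do, which is usually what you want for GUI-wide notifications.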
  15. Nope, just guessing. Something with two tabs very often ends up as ten tabs with a single event structure handling dozens if not hundreds of value change events. More than one UI loop is good, not bad. You may know what you want on the screen today...
  16. Not directly answering your question, but I would say the biggest favour you can do yourself is to not use tabs; subpanels are your friend here. To answer your question, I know that in the past LabVIEW's "cleverness" sometimes caused bugs with charts where data would just not be present on the indicator if it was not visible, but I would not bother about unloading panels.
  17. Craig, it's not like they took features out of LabVIEW when they introduced LVOOP, did they?
  18. Wow, a lot of work has gone into this! Nice toolkit Q. I have never been a big fan of XControls as they just have too many weird issues.
  19. Thanks James. I have tried everything and cannot seem to recreate it. I don't really need it just now, but it is bugging me! :-)
  20. I am looking through some RT code and am a bit confused by the following node. Can anybody tell me how I would make the node with the "chip" icon on it? The VI with this in it is under the RT CompactRIO target folder in the project. I am pretty sure this has something to do with Scan mode on the FPGA, or shared variables, or user-defined variables, but I am not very familiar with any of these. Any ideas? Edit: the C-Series module that this node refers to is currently tucked away under my FPGA Target, so perhaps this is old test code from when the module was used in Scan mode? If I move the module out from the FPGA to just under the chassis, the little glyph looks different, like a little blue square wave with a triangular wave just above it.
  21. Bingo! Seems to work. I think all my clusters will be properly named as they will come from my code, it is only the arrays which come from the OPC-UA toolkit that may have this problem.
  22. This is all part of my master plan (insert evil voice) to use a variant repository for my across-loop data sharing experiment. Currently I am making a nice tree viewer for the variant attributes (kinda like Ton's XControl), mostly based on this.
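The tree-building part is easy to sketch outside LabVIEW. This is a hypothetical Python stand-in, with a nested dict playing the role of the variant attributes: walk it recursively and emit (indent, name, value) rows, which is the shape a tree control wants, with child attributes indented under their parent.

```python
def flatten_tree(data, indent=0):
    """Walk a nested dict (standing in for variant attributes) and
    emit (indent, name, value) rows for a tree-style viewer."""
    rows = []
    for name, value in data.items():
        if isinstance(value, dict):
            # A nested container becomes a parent row with no value,
            # followed by its children one level deeper.
            rows.append((indent, name, ""))
            rows.extend(flatten_tree(value, indent + 1))
        else:
            rows.append((indent, name, str(value)))
    return rows

attrs = {"motor": {"speed": 100, "enabled": True}, "version": "1.2"}
for depth, name, value in flatten_tree(attrs):
    print("  " * depth + name, value)
```

The indent column maps directly onto the parent/child relationship a tree control displays, so populating the viewer is just a loop over the rows.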
  23. Courtesy of the OPC-UA API that comes with the DSC toolkit. I did try to check for a variant in a variant, but it seemed to keep going down the rabbit hole. I will try the variant to array of variants; I do use this elsewhere. At the moment this is getting me out of jail, but it feels wrong on so many levels. The OPC-UA toolkit kindly tells me what the data type is; it just does not bother putting that into the variant.
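The rabbit-hole problem is generic recursion-with-a-guard. A hypothetical Python sketch (nested single-element lists standing in for variant-in-variant, nothing here mirrors the actual OPC-UA API): keep unwrapping until you hit a leaf, with a depth limit so a pathological nesting cannot loop forever.

```python
def unwrap(value, max_depth=10):
    """Unwrap nested single-element containers (standing in for
    variant-in-variant), with a depth guard against going down
    the rabbit hole forever."""
    depth = 0
    while isinstance(value, list) and len(value) == 1 and depth < max_depth:
        value = value[0]
        depth += 1
    return value

print(unwrap([[[3.14]]]))  # 3.14
print(unwrap([1, 2]))      # [1, 2] -- multi-element, left alone
```

The guard is the important part: without it, any self-referential or unexpectedly deep structure turns the check into an unbounded descent.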
