Posts posted by drjdpowell

  1. Hopefully the image explains my question pretty well.

    Not really, I’m afraid. :)

    The “To More Specific Class” function only affects the wire types. You’re saying “cast the object on this parent wire to this child wire, if possible”. But though your selector returns a child object, that happens at run time, not edit time, where the wire type coming out of Index Array can only be the parent wire. So “To More Specific Class” isn’t doing anything here.

    Does that help? I can’t actually tell what you want to happen. Actually, what are you using to get the name of the class in your subVI? Is that what isn’t working?

    — James

  2. We have inevitably run into two road blocks.

    1. The “default” issue mentioned above. There is no reasonable way, to my knowledge, to deal with this issue.
    2. Access to type information (especially to build an object from XML data). In the thread mentioned above, I sought the default value of the class. This is actually not sufficient (at least not in the form we have it). What we need is access to the class data cluster (i.e., be able to assemble a cluster of the proper type and values)--and then be able to write the cluster value to the object. I don’t think this is currently possible.

    What about using the LVClass references (like here) to determine the default object’s data clusters? AQ says that LVClass references are very slow, but one could keep a look-up table of the needed information inside the Flattening VI (i.e. each object type needs to have its default info determined only once). I was looking into this stuff a few weeks ago, and it seemed to me that one could get around the “default” issue this way. Then one could make a VI that flattens an object, and then fills in the missing default pieces. One could also make VIs that alter the cluster values this way (I was actually thinking of making a VI that could take objects of different types, such as parent/child, and copy from one to the other all the common levels of cluster data).

    I never did any actual tests, so there may be a roadblock, but I think LVClass references can give one the information one needs.
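    The “fill in the missing default pieces” idea can be sketched in textual code. This is a hedged Python analogue (LabVIEW classes are not Python dataclasses, and the names here are illustrative): start from the class’s default instance, then overwrite only the fields present in the partial data, leaving everything else at its default.

```python
from dataclasses import dataclass, replace

# Illustrative stand-in for a class's private data cluster with defaults.
@dataclass
class Settings:
    gain: float = 1.0
    offset: float = 0.0
    label: str = "default"

def from_partial(cls, partial: dict):
    # Build the default instance, then overwrite only the supplied fields;
    # unmentioned fields keep their class defaults.
    return replace(cls(), **partial)

s = from_partial(Settings, {"gain": 2.5})
print(s)  # gain overridden, offset and label still at their defaults
```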

    — James

  3. I'm glad to hear the messaging system can be so fast if done properly.

    Who you calling fast? That’s the first time I measured it and I thought it was slow. Something built for speed would be much faster. But Property Nodes, and anything that runs in the UI thread, are REALLY slow.

    Do you place each data value into a waveform to attach the timestamp? I was using waveforms in the last revision but it seemed to use up too much space. A file with only 30-40 data points per minute would be megabytes in size after a couple of days. I don't know why, it was in .tdms format and every time I took x readings I would average them, convert the result to waveform data, and append it to the file. Since it was not streaming data I didn't know of any better way. Now I just store a double value in the .tdms file which takes much less space than the waveform. I would love to store a timestamp for each value so if you have a good compact method let me know please. I need my data file to be tens of kilobytes, not megabytes.

    Using a waveform for each individual reading does NOT seem like a good idea. Why not just store two channels: “Measurement” and “Measurement_Time”? That’s what I do.
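    A back-of-the-envelope comparison makes the point. This is plain Python, not LabVIEW, and the 64-byte per-record overhead is an assumption for illustration, not a measured TDMS figure: two paired float64 channels cost 16 bytes per reading, while wrapping every reading in its own waveform record adds fixed overhead per point.

```python
def paired_channel_bytes(n_points: int) -> int:
    # two float64 channels: one value and one timestamp per reading
    return n_points * 2 * 8

def per_reading_waveform_bytes(n_points: int, overhead: int = 64) -> int:
    # one waveform record per reading: the 8-byte sample plus assumed
    # per-record overhead (t0, dt, headers)
    return n_points * (8 + overhead)

n = 40 * 60 * 24  # ~40 readings per minute for a day, as in the post
print(paired_channel_bytes(n), per_reading_waveform_bytes(n))
```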

    Can either of you comment on the sub-panel vs. opening/closing GUI's question?

    I’ve never noticed a problem with subpanels, but I’ve never looked into the speed.

    Also, any suggestions on a better way to customize controls/indicators on the GUI at runtime. Right now I have to store all the captions, limits, marker spacing, etc...in a file/cluster and load them in using property nodes for each control.

    I’ve never done anything like that. Usually, I have most of the detail of an instrument in its own Front Panel, with only the key summary on the main screen of the whole application. So, for example, if I have a temperature controller, the main app might just show the temperature, the set point and some other summary info (like “ramping”). Clicking on the temperature indicator (or selecting from a menu, or whatever) would bring up the “Temperature Controller Window” (which could be in a subpanel) with a full set of controls (PID settings, maximum temperature, alarms) and indicators (a chart of the last ten minutes of temperature and heating). The secondary window would be specific to that controller, so it doesn’t need adjusting, while the main app window would have only the general info that doesn’t need to change if you use a different controller.

    Another technique is to realize that most simple indicators are really just text, or can easily be represented by text, and thus one can do a lot with a multicolumn listbox or tree control to display the state of your instruments.

    — James

  4. 3. What about updating indicators? It seems that passing UI indicator references to CHx is faster than CHx sending messages to the UI (or to mediator then to UI). I need good response time: Example - I want an LED to light up on the front panel when a heating relay is turned on and off. The relay may only be on for 50ms every second. Can I really send a LED ON msg and then an LED OFF msg and expect it to work smoothly? For 2-8 channels at once?

    Actually, I’d expect sending a message via a queue and updating a terminal to be faster than using an indicator reference and a property node. Property nodes are quite slow.

    I did a quick time test with the messaging system I use (which is very similar to LapDog); a round-trip command-response (i.e. send a message via one queue and get a response back via another) was about 250 microseconds, or about 130 microseconds per leg. A single Value property node on a Numeric indicator was between 300 and 500 microseconds.
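    The round-trip measurement can be sketched in textual code. This is a hedged Python analogue (the original test used LabVIEW queues, so these numbers are only an analogy, not the LabVIEW figures): send a command on one queue, have a worker echo a reply on a second queue, and time the full command-response loop.

```python
import queue
import threading
import time

cmd_q: "queue.Queue[str]" = queue.Queue()
resp_q: "queue.Queue[str]" = queue.Queue()

def worker() -> None:
    # Echo each command back on the response queue until told to quit.
    while True:
        msg = cmd_q.get()
        if msg == "quit":
            break
        resp_q.put("ack:" + msg)  # response leg

threading.Thread(target=worker, daemon=True).start()

n = 1000
start = time.perf_counter()
for _ in range(n):
    cmd_q.put("ping")        # command leg
    resp = resp_q.get()      # block until the response arrives
elapsed = time.perf_counter() - start
cmd_q.put("quit")
print(f"round trip: {elapsed / n * 1e6:.1f} us")
```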

    Also, note that a delay is unimportant if it’s equal at each end; the relay indicator will still be on for 50ms. One could also switch to indicating heater duty cycle (x%) rather than flash an LED.

    — James

  5. I am curious as to your application where you need a changeable menu at run-time?

    Ah. I don’t use it to change the menu at run time. I use it to make adding/changing options very easy. If I needed a new option, say displaying the X scale in photon energy instead of wavelength, I would just drop a boolean in the Options cluster, name it “Display X axis as photon energy”, and go change the display update code to read that option. Job done. No need to manually configure a runtime shortcut menu, then write code to interpret the tags and set the appropriate state parameter in the shift register (with all the additional potential for bugs/misspelling). Also, if the enums are type-defs, the menu will automatically adjust to changes in the type-def, without me having to manually update the runtime menu.

    Basically, it avoids doing something (defining the options) twice, in the code and in the menu setup, and avoids the necessity of custom interface code between the two.
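    The “define the options once, generate the menu from them” idea can be sketched as a hedged Python analogue (an Options cluster becomes a dataclass; the field and enum names are illustrative, not from the original LabVIEW code): menu entries are generated by walking the record’s fields, so the menu tracks the type definition automatically.

```python
from dataclasses import dataclass, fields
from enum import Enum

class XAxisUnits(Enum):
    WAVELENGTH = "Wavelength"
    PHOTON_ENERGY = "Photon energy"

# Stand-in for the "Options cluster": each option is declared exactly once.
@dataclass
class Options:
    display_x_as_photon_energy: bool = False
    x_axis_units: XAxisUnits = XAxisUnits.WAVELENGTH

def build_menu(opts: Options) -> list[str]:
    # Walk the record's fields: booleans become checkable items, and each
    # enum expands to one item per value.
    items: list[str] = []
    for f in fields(opts):
        value = getattr(opts, f.name)
        if f.type in (bool, "bool"):
            items.append(f.name.replace("_", " ").capitalize())
        elif isinstance(value, Enum):
            items.extend(v.value for v in type(value))
    return items

print(build_menu(Options()))
```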

    • Normally OpenG functions are not set to be re-entrant unless there is a specific need; therefore, I have turned this setting off, as it makes it easier for end users to debug, plus there are associated memory costs with launching clones

    How does the new “inline” option play into that? It has always seemed strange to me that OpenG VIs are mostly non-reentrant. It makes them different from the LabVIEW primitives that they supplement/extend. But that might be a different conversation.

    Though I note that you did remove the “Get Default” subVI from “Return Class Name” to improve performance. In doing so you eliminated the self-documenting advantage of the subVI; a new LabVIEW user will be mystified as to what “Preserve Run-Time Class” is doing here. Would it not be better to “inline” that subVI and use it? Or just accept a performance hit (which I think is quite small)? At the very least we need a comment here.

    Side Note: I noticed that OpenG has an off-palette subVI for parsing a PString (“Get PString_ogtk”) that is used in the Variant VIs. Perhaps that should be used in “Return Class Name”, where we are walking along a set of PStrings?
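    For readers unfamiliar with the format: a PString is a Pascal-style string, a length byte followed by that many bytes, and LabVIEW’s flattened qualified class names are a run of them. This is a hedged Python sketch of that walk, not the actual “Get PString_ogtk” implementation:

```python
def get_pstring(data: bytes, offset: int = 0) -> tuple[str, int]:
    # Read one Pascal string: a length byte, then that many bytes.
    length = data[offset]
    start = offset + 1
    return data[start:start + length].decode("ascii"), start + length

def read_qualified_name(data: bytes, count: int) -> list[str]:
    # Walk `count` consecutive PStrings (e.g. library names, then the
    # class name) and collect them in order.
    names, offset = [], 0
    for _ in range(count):
        name, offset = get_pstring(data, offset)
        names.append(name)
    return names

blob = b"\x05MyLib\x07MyClass"  # illustrative flattened-name fragment
print(read_qualified_name(blob, 2))  # ['MyLib', 'MyClass']
```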

    • I don't see the benefit of having two VIs (originally Class Display Name and Qualified Class Name) and maintaining two similar pieces of code - so I merged the two

    The only advantage is getting the short name slightly faster, but that is probably not a good enough reason for a separate VI. BTW, maybe we should call the short name form “Base Name” or “Class Base Name”.

    Everything else looks fine to me. The only quibble is that the documentation for “Return Class Name” singles it out as being “highly inefficient” when it is considerably more efficient than many VIs that ship with LabVIEW, such as the VariantType library or “Get LV Class Path”. But ignore the quibble.

    — James

    • I don't see the need to pass the Object back out. I would assume this may help memory allocation but not 100% sure (can anyone comment). Regardless other comparison functions (both OpenG and LabVIEW) don't do this

    Hi JG, great work!

    Please see my previous post where I introduced the object outputs. At least when calling these VIs in a For Loop on an array of objects, the lack of an output seems to require a copy of the object, leading to poor performance for large objects. This adds (in my tests) more than 150 microseconds of execution time for objects containing a 1-million-element U8 array. This compares with times of between 1 and 10 microseconds for the various proposed VIs with object outputs (note: I don’t even need to connect the outputs).

    Admittedly, I don’t particularly like having outputs, especially on comparison VIs, but the performance difference is potentially large. Large enough that I would have to worry about it and keep custom “high-performance” versions of the VIs to use if it matters, or just use the raw code in each case. This would eliminate most of the advantages of putting these VIs in OpenG. You did a similar thing in replacing the “Get Default Object” subVI in the Name VI; I want to avoid that kind of thing as much as possible.

    Note, also, that having object outputs, though not consistent with other OpenG VIs, is consistent with how LVOOP methods are written, where even “Read” methods have an object output.

    So, I would strongly argue for these outputs, and encourage any lurkers to comment on this one way or the other.

    I’ll write more later, gotta go...

    — James

  6. Strange. I put ShaunR’s VI (which seems to just call into LabVIEW.dll) into my speed tests using small objects, and it executes in about 8 microseconds, while the “GetTypeInfo” VI (the password protected one, which I would have thought does exactly the same thing) executes in 240 microseconds!!!

    Comparable OpenG functionality (based on the int16 type array) executes in 22 microseconds.

    I wonder what extra overhead is in that password-protected VI?

    — James

  7. The variant type library in vi.lib is the type library of LabVIEW. The Int16 array is no longer maintained for new types, and hasn't been for years. Yes, it still works on many, even most, types, but it isn't what I would suggest you use, certainly not in the long run.

    Are there plans to improve the performance of the VariantType library? The few VIs I’ve tested seem to be about an order of magnitude slower than equivalent OpenG functionality based on parsing the Int16 type array. I never use them as a result.

    — James

  8. That performance optimization makes sense to me given what I know of the compiler. It's a bit weird, but I know why it is being done. I'll talk to the compiler guys and see if they can clean things up without the need for that oputput (LV2013 or later, so keep your optimization for now).

    I’ve had to modify all the proposed VIs to have output terminals for the objects; otherwise they all had poor performance for large objects. This was even true when they were “inline”, which means I don’t really understand what inlining actually does.

    — James

  9. Isn't the VI up above one that, given an object, returns the default value of that object? And Get LV Class Default Value.vi returns the default value if all you know is the class. What are you looking for?

    I was joking (should have used a smilie). Of course I’m already using “Get Default Object.vi” inside “Class Name.vi” so that I’m not flattening the data of the input object (which was your initial criticism).

    Actually, this is why I was confused by my later time tests, where “Class Name” is slower for large objects. There must be a copy going on somewhere. So I experimented and found that I could make it faster by passing the input object through to an output terminal:

    [attached image: block diagram with the input object wired through to an output terminal]

    I modified “Qualified Class Name” in the same way. Now they give execution times of around 4-5 microseconds, independent of object data size (up to 1 MB, at least).

    In any case, the VI that I wrote for jgcode gets the qualified name. And it is an inefficient way to do so that I would not recommend using for most lookup type applications. It works fine for probes/scripting/reflection type utilities, but not production code, which is what anything in the palettes should be intended for.

    That's why I asked if there were any suggestions for addressing my concerns in the name and iconography, to make sure that this is used only for the probes use case. … This is a fine mechanism *for* *probes*. But it doesn't really belong in the palettes where it might be mistaken for other purposes if you give it the nice friendly name that it has at the moment.

    Should the standard for the OpenG palette be the same as for the LabVIEW palette? The current OpenG Data palette is full of VIs for doing non-standard things with variants by using the inefficient method of parsing flattened data. These new object VIs don’t look out of place. What if we call the VI “Class Display Name” to try and make the point that this is really a name for human readability rather than look-up?

  10. Here's the deal: This VI flattens your entire object data to a string. That's not cheap and it gets even more expensive as the object carries more data.

    If only we had some VI that returns the default value of an object’s class, so we don’t need to flatten the data. Someone should make one of those.

    I would oppose the general dissemination of the "Get Name of Class.vi" without a lot of caveats. Specifically, I'd be tempted to name the VI "Get Name of Class in a highly inefficient but generic manner good for probes but inappropriate if you are really writing a by-name-lookup object hierarchy.vi" ;-)

    Wouldn’t by-name-lookup code use the Qualified Name, for uniqueness? Getting just the name is really for human readability, i.e. probes and the like. Also, anyone with more than a tiny bit of LabVIEW ability will take less than five minutes to identify “Get LV Class Path” as a way to get the name of a class. Now THAT is an inefficient method!

    I did a quick speed test of getting a name string for objects with internal data of 10kB (and 1000kB), with time in microseconds:

    Class Name: 7 (160)

    Fully Qualified Class Name: 9 (170)

    Get LV Class Path: 129 (630)

    “Get LV Class Path” is on the standard palettes, so if people are doing by-name-lookups I suspect some of them are taking the shortcut of not having a “Get Class Name” override at every level.

    And, hey, some of us just want to write some probes!

    If I am really writing a hierarchy where I expect to do name lookups -- like for database storage type systems -- as part of my actual runtime code, I'm going to add a dynamic dispatch method to the root class called "Get Class Name.vi" and override that at every level of the hierarchy to return a string constant. This avoids the memory allocation and performance hit of the flattening to the string (indeed, if all goes as planned, in 2013's compiler, it will avoid any memory alloc entirely as long as the returned strings really are constants). This solution is what is used in C++.

    Using any sort of reflection API to get this information in C# is less efficient than adding the "get class name" method. As far as I know, nothing beats adding this to your class definition. We don't do it for all classes generally because it extends the dispatch table with a feature that most classes do not need.
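    The pattern AQ describes, a dispatch method overridden at every level to return a constant string instead of using reflection, can be sketched as a hedged Python analogue (the class names are illustrative, not from the Actor Framework):

```python
class Message:
    def get_class_name(self) -> str:
        return "Message"            # constant per class, no reflection

class StopMessage(Message):
    def get_class_name(self) -> str:
        return "StopMessage"        # overridden at this level

msg: Message = StopMessage()
print(msg.get_class_name())         # dynamic dispatch to the override
print(type(msg).__name__)           # the reflection route, same answer
```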

    Sounds great, but do you really want to add a method to every single command message class you write in the Actor Framework just to support the ability for someone to write a custom message probe?

    — James

    PS> Like all your VI renaming suggestions.

  11. I (personally) think the Class Library Refnum API should be separate - to the VIs you have provided that work on Object Data.

    I was imagining separate sub-palettes of an overall “LVOOP” palette, but perhaps they should be completely separate.

    Actually, a few days ago I was looking at the Class Library Refnum API to see if it could help me with another idea I have for a useful Object VI. One could do some interesting stuff, but unfortunately Class Refnums don’t work in the Run-Time Engine, so one can’t put anything built on them into an exe.

    — James

    What I’d like to see is for an LVLib to not automatically load its contained LVClasses until and unless something else in memory refers to that class. In other words, have LVClasses follow the same loading rule as regular VIs in libraries. That would allow me to organize classes in libraries without issue. It would be nice if LVClass libraries could load piece-by-piece, but that is less important and probably not doable.

    — James

  13. Thanks Ton, we should change everything to “Object”.

    Regarding “Same or descendant class”, I forgot to make the inputs “required”, and the capitalization of the VI name is inconsistent with the other VIs; should be “Same or Descendant Class”.

    Putting these VIs in LabVIEW Data would be fine, though that library is quite large already. It might be better to have a new “LVOOP” library, especially if there is a chance of adding more tools in future, such as the Class manipulation tools jgcode linked to.

  14. But what I did not realize is that if I use a class in a library then all classes get loaded into memory. I have some libraries with a lot of classes in them and now have to think about breaking things up somewhat. Again, it has not proven to be an issue yet but I want to be proactive.

    I went through something similar about a month ago. Most of my classes are not in libraries now, which is not really what I would prefer, but it is necessary if I want to load only the necessary software (every so often I open a new project, open a fresh VI, drop a single object into it, and see how much loads up under “Dependencies”). It’s more than just libraries; any VI in a class that refers to another class acts as a “linker”, causing that other class to fully load into memory. It just takes a few linkers to get a sort of “domino effect” as class after class loads up.
