Posts posted by smithd


    So if you put a large array in a queue, and dequeue it in another loop, no data copy is ever made?

    Yes and no. I've had to characterize this recently with a cluster of various arrays, and using the RT Trace Toolkit (on a 9068 running LVRT 2013) I've found that:

    -If you pass a data set into a queue, the original (upstream) producer of that data set must generate a new buffer, because the current buffer is handed off to the queue

    -If that upstream producer is an empty queue, an empty buffer must be allocated. For some reason I don't understand, this ends up with something like five "wait on memory" flags in the trace.

    -If that upstream producer is a full queue, no new buffer will be allocated

    -If the buffer (pulled out of any of the queues) is fed to another API, like IMAQ, you'll end up losing that buffer and you'll need to allocate a new one, unless the API can avoid stomping on the data.


    tl;dr: in normal operation you won't make a copy of the full dataset by using a queue until you pass that dataset to someone else. For determinism purposes, dequeuing from an empty queue will cause an allocation, which is why we have RT FIFOs. If you can avoid another API stomping on the data, you can pass a set of buffers through a loop of queues (A->B->C->A...) without allocations.
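The buffer-recycling ring described above (A->B->C->A) can be sketched outside LabVIEW too. This is a rough Python analogue, not the actual LabVIEW mechanism: a fixed pool of buffers circulates by reference through a ring of queues, so in steady state no new buffers are ever allocated.

```python
import queue

# Pre-allocate a fixed pool of buffers; the loops hand them around
# by reference instead of allocating fresh ones each iteration.
pool = [bytearray(1024) for _ in range(3)]

q_ab = queue.Queue()  # A -> B
q_bc = queue.Queue()  # B -> C
q_ca = queue.Queue()  # C -> A (recycle)

for buf in pool:
    q_ca.put(buf)  # seed the recycle queue

ids_seen = set()
for _ in range(9):  # three full trips around the ring
    buf = q_ca.get()   # A: grab a recycled buffer
    buf[0] = 1         # ...and fill it with new data in place
    q_ab.put(buf)
    buf = q_ab.get()   # B: consume, pass along
    q_bc.put(buf)
    buf = q_bc.get()   # C: consume, recycle back to A
    q_ca.put(buf)
    ids_seen.add(id(buf))

# Only the three original buffers ever circulate -- no allocations.
print(len(ids_seen))  # 3
```

The same caveat applies as in the post: the scheme only works as long as nothing downstream stomps on (or keeps) a buffer, forcing a replacement allocation.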


    Obviously all of the above is for my use case only and you should always use a trace tool to confirm that your code behaves as expected if performance is critical to your application.

  2. I don't mean to be that guy, but I think creating an express VI would get you halfway there. Express VIs can go on the palettes and can contain any code. Once you drop one down, you can right-click and select "Open Front Panel", and it will prompt you to convert the express VI into a normal VI. You can then save this converted VI and boom, you have your desired outcome.


    Negatives that I am aware of:

    -Your code starts out in humongous express VI form (rather than as an icon).

    -Your code would, technically, be an express VI.


    That having been said, if this is a common use case for you I think making some sort of tool would be the cleaner way of doing it.

    • Like 1
  3. Do either of these help? (I have no clue why there are two, they're basically the same ;) )




    The real question I have to ask is whether you need the call to be dynamic, or just asynchronous. If it does not need to be dynamic, just use a static VI reference. This will ensure the app builder finds, includes, and loads the appropriate code into the exe. This is what many of the LabVIEW examples do, so take a look in the Example Finder (specifically the section on controlling applications programmatically, which is code for "VI Server").


    If it absolutely needs to be dynamic, then you're better off following the KB steps: set the VI in question (St1-TCP.....) as always included, specify a folder relative to the exe (for example, ../dynamic dependencies), and use a relative path rather than the fixed path you have there.

    • Like 1
  4. Hi all,

    I'm creating a LabVIEW Web Service in LV2013. So far it's been great, but there is one feature I can't find. The "Set HTTP Response Code.vi" doesn't take a string as an input. So I can provide an error code, but no detail string! To make matters worse, if I report a 4xx, the response body is not passed on, so I have to hack the error into a header. There must be a better way. Has anyone encountered this yet?




    Are you referring to the "Reason-Phrase" (http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html)? If so, I've tried everything I can think of and have been unable to come up with a way to set that field. LV seems to default to the phrases defined by HTTP. If you just want to send data back, you can always use "Set Response.vi". I just tried that, setting the response code to 401 and writing a response string ("OH NO! ERROR!" in my case), and the browser correctly displayed that string despite the error code. Are you saying this didn't work for you?
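To illustrate the distinction being discussed (outside LabVIEW entirely), here is a minimal Python sketch: an HTTP response can carry a 4xx status code *and* a detail string in the body, even though the reason-phrase itself is fixed by the server stack.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.error
import urllib.request

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"OH NO! ERROR!"
        self.send_response(401)                        # the status code
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                         # the detail string

    def log_message(self, *args):                      # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

try:
    urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/")
    code, detail = None, None
except urllib.error.HTTPError as e:
    code, detail = e.code, e.read().decode()           # body survives the 401
server.shutdown()
print(code, detail)  # 401 OH NO! ERROR!
```

The reason-phrase ("Unauthorized") still comes from the stack's built-in table, which mirrors the LabVIEW behavior described above.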

    • Like 1
  5. I think most people would disagree with me, but I think of it like this: a VI is something whose UI is intended to be used (even if just as a dialog or debug screen). Anything else, which is intended to always be called by something else with a UI, is a subVI, function, or subroutine.



    Maybe it's nitpicky, but it bothers me when someone asks me if I have a subVI to do such and such.

    I think I personally would just call it "a function to do blah". I mean, technically I don't care if it's written in LabVIEW, just that it does what I want it to do :)


    This becomes tricky with RT, where the top-level VI should be the only function in your entire codebase which *doesn't* have anything on the front panel, but... it's still the thing you go to in order to run the system, so that's still the "UI".

  6. Something posted above makes me less confident about my use, but I have a set of objects which store configuration information and a set of objects which present a UI for that configuration. However, I don't want the config objects to know about the UI objects, *and* this all has to be in a plugin system. I want it to be {some subset of N}:{some subset of N}. That is, editor A may support configurations 1 and 2, but configuration 2 may also be used by editor B.


    So what I did was make sure the editors know what classes they can support, and use Preserve Run-Time Class to test every available configuration object (1, 2, 3) against the configuration objects supported by every editor (A, B, C). This successfully lets me know whether (unknown config object) is supported by editor A, B, or C. I don't know if there is a better way to determine class hierarchies in the run-time environment, but this is what I came up with.
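For readers outside LabVIEW, the N:N matching pattern described here can be sketched with runtime type tests, which is roughly the role Preserve Run-Time Class plays in the post. All class names below are hypothetical stand-ins:

```python
# Configs and editors come from plugins; configs never import editors.
class Config: pass
class Config1(Config): pass
class Config2(Config): pass
class Config3(Config): pass

class Editor:
    supported = ()  # tuple of config classes this editor can edit
    def supports(self, cfg):
        # Runtime hierarchy test: does cfg descend from a supported class?
        return isinstance(cfg, self.supported)

class EditorA(Editor):
    supported = (Config1, Config2)

class EditorB(Editor):
    supported = (Config2,)

configs = [Config1(), Config2(), Config3()]
editors = [EditorA(), EditorB()]

# Test every available config against every editor, as in the post.
matrix = {type(c).__name__: [type(e).__name__ for e in editors if e.supports(c)]
          for c in configs}
print(matrix)
# {'Config1': ['EditorA'], 'Config2': ['EditorA', 'EditorB'], 'Config3': []}
```

The key property is the same: the dependency points from editors to configs only, so new editors can claim existing configs without the configs changing.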

  7. Glad to hear it.


    One other tip you might keep in mind is to ensure you have a timeout of 0 in your DMA loop, or otherwise fairly tightly match the timing of the loop with the timing of the DMA. Any time period in which the DMA node is waiting on data is a busy wait -- that is, it is hogging the CPU while looking for data. This will definitely hurt the performance of your system (and even if you are able to keep up, it can't hurt to do a little better :) ). This limitation is resolved in the shiny new cRIO released a few weeks ago, and the solution is documented here: http://digital.ni.com/public.nsf/allkb/583DDFF1829F51C1862575AA007AC792
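The "timeout 0" pattern generalizes beyond DMA FIFOs. As a sketch (plain Python, nothing to do with the actual NI driver): poll non-blockingly, and if no data is available, yield the CPU yourself instead of letting the read call spin.

```python
import collections
import time

fifo = collections.deque()  # stand-in for the DMA FIFO

def read_fifo(n):
    """Non-blocking read (timeout 0): return up to n elements immediately."""
    out = []
    while fifo and len(out) < n:
        out.append(fifo.popleft())
    return out

def producer_tick():
    fifo.extend(range(100))  # pretend the DMA delivered 100 samples

collected = []
for _ in range(10):
    producer_tick()
    data = read_fifo(100)
    if not data:
        time.sleep(0.001)    # sleep here instead of spinning in the driver
    collected.extend(data)
print(len(collected))  # 1000
```

The sleep duration is the knob: it should roughly match the DMA's delivery rate, which is the "tightly match the timing" advice above.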

  8. It does sound like you should be able to get better than that, but it's hard to say without breaking the problem up a bit. One possibility is that the code is doing something strange which limits your max log rate. I'd try running the code logging to the local disk and see what rates you can get. Then you can compare to these benchmarks, which are against the main disk: http://www.ni.com/example/31206/en/#toc7 (the sbRIO should be a little better than the 9074, but likely not as fast as the 9068).


    Assuming you get the expected rates when writing locally, you can start looking at the USB. There is absolutely no way you're going to hit 480 Mbit/s -- that's the *bus* limit, not the rate at which your CPU or the attached device can log to disk. But you can still look at the performance and try to improve it to meet your needs. One option is trying a different USB stick. Another thing worth checking is the formatting. I believe the sbRIOs should be able to handle FAT16 or FAT32, but you won't get anywhere near the speed of an NTFS system. (Source: random person on the internet: http://www.msfn.org/board/topic/125116-fat16-vs-fat32-vs-ntfs-speed-on-usb-stick/)
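A quick way to separate the code from the medium, as suggested above, is a dumb sequential-write throughput check that can be pointed first at the local disk and then at the USB stick. This is a generic Python sketch, not the original LabVIEW benchmark:

```python
import os
import tempfile
import time

def write_throughput(path, chunk_kb=64, chunks=256):
    """Write chunks sequentially and return the observed rate in MB/s."""
    data = os.urandom(chunk_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(chunks):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # include the actual flush to media
    elapsed = time.perf_counter() - start
    return (chunk_kb * chunks / 1024) / elapsed  # MB/s

# Point `target` at the local disk, then at the USB mount, and compare.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    target = tmp.name
rate = write_throughput(target)
os.remove(target)
print(f"{rate:.1f} MB/s")
```

The fsync matters: without it you measure the OS write cache, not the stick.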


    Hi smarlow,
    I know this is a really old thread, but I am intrigued by the use of WebSockets and LabVIEW.  Lately I found a library that NI released:
    https://decibel.ni.com/content/docs/DOC-38927 and I found the code to be very similar to yours.  I was just curious if you knew about this... basically this library combines WebSockets and NI's CVT so that CVT tags get pushed out and exposed through WebSocket connections.  You still have to write your own UI web code, though.  There's another package that's fascinating lately: http://labsocket.com/.  This commercial software supposedly "scrapes" your front panel and generates a thin-client webpage for you.  Haven't tried it, but it looks promising.  Cheers!



    Hey sharkera,


    That toolset actually consists of a few things.


    The first package, tagwebclient, is a javascript+HTML web page along with some LabVIEW type definitions which are intended to allow you to create new web services which provide tag-oriented data to the web using a standard interface. At present it needs some work, but the basic concept is down. For example, there is a "tag browse" page which requests the available list of tags from a correctly-formatted web service.


    The second package, cvt web addon, is a LabVIEW web service which uses the CVT and aims to meet the contract required for the tagwebclient. Originally, these packages were combined but we are interested in making a web service which supports the new tag bus library (which is basically CVT on a wire): https://decibel.ni.com/content/docs/DOC-36995


    Finally, the cvt web addon does have a sample WebSocket server. However, to the best of my knowledge it currently just supports sending strings back and forth. It is not used to transfer tag data -- standard LabVIEW web service functions are used for that.




  10. Sorry about that. The link was to a poll on the CLA community, as Christian mentioned. The results indicated heavy support for opening the GDS toolkit which is now covered under the process identified here: https://decibel.ni.com/content/groups/community-source-of-ni-code


    This new discussion was intended to be more open-ended than the poll. It is posted here: https://decibel.ni.com/content/thread/23441

  11. I had to write a QuickDrop shortcut to replace the property nodes with their underlying accessor VIs.  

    Do you still have this (or have you posted it somewhere)? When I found my most recent issue with property nodes (which is fixed), I went through code manually to do the same.




    I took a quick look over it.  I removed the write before the property node reads because I thought it might be doing some priming that wasn't necessary.  I also made all of the loops use a shift register to use the same object the whole time, instead of copying it each time.  Finally, I made every VI read or write four times instead of just once, so that the differences in timing could compound and be more noticeable.  It looks like the property nodes are 8% slower.  I haven't looked at these tests with a critical eye (or a properly caffeinated eye), but that's about what I'd expect.  I bet if you included the same error checking as the property node the difference would be closer.  I hope you're coming to the LAVA BBQ  ;)

    Interesting changes. I wasn't sure whether writing the same value would be optimized out, so that's why I made a new random value for each run. Is there an easy way to see what code gets eliminated?
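Whether LabVIEW eliminated a given piece of code is best answered with its own trace tooling, but the general benchmarking pattern being debated here is language-agnostic: make every iteration's result *live* by consuming it, so no optimizer can legally discard the timed work. A Python sketch of that pattern (the accessor is a hypothetical stand-in):

```python
import random
import time

def accessor_write(obj, value):
    obj["field"] = value  # stand-in for a property/accessor write

def bench(n=100_000):
    obj = {"field": 0.0}
    checksum = 0.0
    start = time.perf_counter()
    for _ in range(n):
        value = random.random()   # fresh value each run, as in the post
        accessor_write(obj, value)
        checksum += obj["field"]  # consume the result: defeats elimination
    elapsed = time.perf_counter() - start
    return elapsed, checksum      # report the checksum so it stays live

elapsed, checksum = bench()
print(f"{elapsed:.4f} s, checksum={checksum:.2f}")
```

Returning (or printing) the checksum is the whole trick: a result that escapes the loop cannot be dead-code-eliminated, which is also what the "forfeit your beer" clause above is guarding against.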


    The reason I personally don't like property nodes is the error checking. I know a lot of people disagree with me, but I prefer not having error wires on code that isn't expected to throw an error. Accessors shouldn't throw errors (unless I know I want to do validation). Property nodes force me to add error wires which I never expect to use.


    However, property nodes do keep things clean in some cases and they really improve discoverability of data, so I see them as a necessary evil :)

  12. BONUS: I'll buy a (free) beer at the LAVA BBQ to anyone who writes a test that shows a big difference in performance between property nodes and subVIs.  Consider whether the subVIs are inlined, too.  I reserve the right to judge what a big difference is.  To be clear, the factors to consider are: 1) Ignore Errors Inside Node, 2) Case structure in property accessor, 3) Inlined accessor VI or not.  You immediately forfeit your right to a beer if some crucial part of your code gets dead code eliminated.  Competition ends August 4, 12pm CDT.  Really the only purpose of this is to settle the question once and for all.

    I was curious so I put something together (well most of it was stuff I've had lying around from previous benchmarks). I figured my first go at the benchmarks would be wrong, so there is a script to generate them.

    1. Run gentests/gentests.vi

    2. Open Benchmarker.vi and set the path to a generated report file

    3. Run benchmarker.vi


    My first attempt is attached. If you want to change the benchmarks, they are the templates in the folder. From this first run, I don't see a difference, but as I said my first attempts are usually wrong when it comes to benchmarking.


    first run.zip

  13. The other smaller question is LVOOP property nodes - I haven't noticed any problems at all with using them, and I'm still on LV2012 (may jump to LV2014 in a month).  Is something unsavory lurking beneath the surface?

    If anything, issues with property nodes have gotten better over the last few years -- at least in my usage. All of the really bad bugs I've seen have been fixed as of this most recent patch (see this recent thread for an example: http://lavag.org/topic/18308-lvoop-compiler-bug-with-property-nodes/).


    But there are still a lot of usability annoyances related to case sensitivity. For example, if I name my property folder "Properties" in the parent class and "properties" in the child class, I'll see two property items which link to the same dynamic dispatch VI. When a recent code review found a misspelling in the property name of a parent class, I had to go through every child implementation (~15 at this point) and fix the spelling there, because the classes were somehow broken. This and more is tracked by CAR 462513.


    Finally, I remember Stephen posting somewhere that property nodes, due to their inherent case structure, prevent LabVIEW from optimizing the code -- even if the VI is set to inline. For this reason alone I avoid them in any code I expect to run at high priority. For UI work, however, I use them extensively.

  14. I'm in the systems engineering group at NI. Over the course of the past year(s), we've received feedback (most recently here -- sorry, this link is in the ni.com/community CLA group so access may be restricted) that some members of the community would be interested in contributing to some of the projects we've already released, or potentially working with us to identify future projects that might benefit the community at large.


    We'd like to renew this conversation by opening up a discussion of what projects might interest the community. That discussion is posted here:



    If you have any thoughts or feedback, feel free to post here or in that discussion. We're very interested in hearing from you.

    • Like 2
  15. I remember this being an issue in 2011 but hadn't encountered it for a while. I recently ran into it again, where the property node won't notice the transition from a 1D to a 2D array. This may be fixed in SP1 patch f3 (look for issue 467722): http://digital.ni.com/public.nsf/allkb/4415B312CC61449A86257C820053B65E


    If updating to this patch doesn't fix it, I would highly recommend letting the applications engineers know and filing a bug report.
