
Neil Pate

Posts posted by Neil Pate

  1. Thanks JKSH 

    I echo your sentiment. Versioning is always going to be a problem, but Tensorflow just recently hit 2.0, so that is what I was planning on supporting as the minimum version.

    Going down the Python route is also interesting, but a bit fraught. Probably due to my inexperience with Python and its toolchain, I spent the better part of a month following myriad online tutorials trying to get Python and Tensorflow working properly on my PC, with very little to show for my efforts.

    I do think it would be reasonable to say that a toolkit such as this requires LV 2018 or greater. 

  2. Hi everyone, 

    I have recently been playing around with the Tensorflow support built into the Vision Development Module. It seems to work fine and I have the basics up and running. There are a couple of problems though that I would like to solve. 

    Firstly, it seems not to use GPU acceleration at all, which mostly makes it useless for real-time processing at any sensible frame rate.

    Secondly, it is locked into the VDM, which is not cheap and is totally closed source. Under the hood the regular Tensorflow DLLs are called, but via a layer of the Vision toolkit.

    Thirdly, because it is closed source we are locked into whichever version of Tensorflow NI chooses. I totally get that from their business point of view, but I suspect there is very little push inside NI to update this regularly due to all the hurdles that come with it.

    My proposal is to implement some kind of community-edition Tensorflow API that wraps the Tensorflow DLL directly in LabVIEW and exposes its hardware acceleration capabilities (the underlying C API is sketched below).

    Anybody interested in collaboration? I know a little bit about Tensorflow, but not enough to be productive, so my first step would just be to mimic the API provided by NI which is deliberately quite simple. 
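
    To make the idea concrete, here is a minimal C sketch of the kind of entry points in Tensorflow's public C API (tensorflow/c/c_api.h, the interface exported by tensorflow.dll) that a Call Library Function Node wrapper would bind to. This is only my illustration of the approach, not NI's code, and GPU acceleration depends on which build of the DLL is actually loaded.

        #include <stdio.h>
        #include "tensorflow/c/c_api.h"   /* public Tensorflow C API */

        int main(void)
        {
            /* Report the library version, e.g. "2.0.0" */
            printf("Tensorflow C library version: %s\n", TF_Version());

            /* Create an empty graph and a session bound to it */
            TF_Graph* graph = TF_NewGraph();
            TF_Status* status = TF_NewStatus();
            TF_SessionOptions* opts = TF_NewSessionOptions();
            TF_Session* session = TF_NewSession(graph, opts, status);

            if (TF_GetCode(status) != TF_OK) {
                fprintf(stderr, "Failed to create session: %s\n", TF_Message(status));
            } else {
                /* A real wrapper would load a frozen model with TF_GraphImportGraphDef
                   and execute it with TF_SessionRun; a GPU-enabled tensorflow.dll then
                   picks up the GPU automatically, which is what the VDM path lacks. */
                TF_CloseSession(session, status);
                TF_DeleteSession(session, status);
            }

            TF_DeleteSessionOptions(opts);
            TF_DeleteGraph(graph);
            TF_DeleteStatus(status);
            return 0;
        }

    In LabVIEW each of those calls would map onto a Call Library Function Node, with the opaque TF_* handles carried around as pointer-sized integers, which is roughly how I would expect a community wrapper to be structured.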

  3. The error probably goes away in your last scenario as dropping the class constant onto the block diagram causes the LabVIEW compiler to automatically include that class in the build. This kind of thing happens all the time with factory pattern type stuff in LabVIEW, where everything works perfectly in the IDE because all the classes are in memory. 

    If that is the problem you can solve it in a number of ways. One of the easiest I have found is to make a dummy VI that you then include in the build or place somewhere on your top-level VI; in this VI just drop down constants of all the classes you are using. That kinda breaks the lazy-loading paradigm, but you win some, you lose some.

    Just as a side note, I avoid PPLs like the plague as I just cannot see a good use case for them in "normal" applications. The sheer number of problems that arise with PPLs has caused me to put them in the same category as Shared Variables: nice in principle, but never to actually be relied upon...

  4. So out of curiosity, how does everyone else handle their name generation? Multiple cameras with multiple image processing steps, each needing temporary storage. I usually try to programmatically generate the name, but thinking about this thread, there must be a better way that I don't know about.
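
    Just to illustrate what I mean by programmatically generating the name: the idea is to compose it from the camera index and the processing step so every buffer name is unique. The camera and step names below are made up, and it is a C-style sketch rather than the actual LabVIEW string handling:

        #include <stdio.h>

        int main(void)
        {
            char name[64];
            int camera = 1;                      /* hypothetical camera index */
            int step = 2;                        /* hypothetical pipeline step */
            const char* operation = "threshold"; /* hypothetical step name */

            /* e.g. "cam1_step2_threshold" as the temporary image buffer name */
            snprintf(name, sizeof name, "cam%d_step%d_%s", camera, step, operation);
            printf("%s\n", name);
            return 0;
        }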

  5. The one thing that always confused me was writing the actual image to the terminal. If it were purely reference-based there would be no need to sequence it as I have done in my demo. The indicator gets updated when the same actual data value hits it, which is a bit un-reference-like (maybe...). As Rolf says, a strange duck.

    This does help me know what happens if, for example, I broadcast an Image reference on a user event. No copy of the whole image occurs, and if I do stuff to the ref coming out of the receiving event it might affect my original image; it is not a copy.

  6. Hi Tuan,

    I would be a bit surprised if you could write to disk at 3.2 GB/s for any reasonable length of time, even with a fast SSD. You say longer than 10 s, but how much longer? If you can buffer all your data in memory you could then stream it to disk at a slower rate as needed (see the quick arithmetic sketch after this post).

    Something I came across recently, which I was surprised even existed, is a feature where an FPGA can use a DMA FIFO transfer to write directly to a TDMS file using a kind of DVR (an external DVR, perhaps). I'm not sure if your hardware supports that, but it might help with performance.
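
    Just to put rough numbers on the buffering suggestion, here is a quick back-of-the-envelope check; the 10 s duration is only a placeholder, since the actual acquisition length was not stated:

        #include <stdio.h>

        int main(void)
        {
            const double rate_gb_per_s = 3.2;  /* acquisition rate from the post */
            const double duration_s = 10.0;    /* placeholder: "longer than 10 s" */

            /* RAM needed to hold the whole acquisition before streaming it out */
            printf("Buffer needed: %.0f GB\n", rate_gb_per_s * duration_s); /* 32 GB */
            return 0;
        }

    Anything much beyond a few tens of seconds at that rate quickly outgrows the RAM of a typical controller, which is why the total duration matters so much here.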

  7. I have a legacy application I am updating to use LV2019. It has an installer for the application which installs the run-time engine and the application itself (and a bunch of other stuff). This is done using InnoSetup and not NI's installer creator. Previously I was able to run the RTE install in totally silent mode, where no popups or anything came up on the screen. I would like to do the same thing, but I cannot use the same parameters as 2019 uses the new NIPM format.

    I can get the installer to run through from start to finish with no user interaction using:

    install.exe --passive --accept-eulas --prevent-reboot

    However, during the install it pops up its own dialogue.

    Does anyone know how I can suppress this dialogue?

  8. 18 minutes ago, hooovahh said:

    The only thing to add to Neil is that the Wait Until Completion should be True if you are going to read the standard output after it is complete.

    Indeed.

    In my circumstance, though, I don't really care too much about the output; I have some downstream code that waits a bit and checks that the PDF file I am creating was generated correctly.

  9. Now that you mention it, I recall a workaround I had to put in place some time ago on a touch-screen-only application. Sometimes my custom dialogs got hidden behind the main panel; the way we solved it in the end (ugly, but it worked) was just to periodically force all the dialogs to be on top. Each dialogue-type actor was responsible for getting itself on top. Not pretty...
