
Posts posted by Neil Pate

  1. Not sure if this is the right place for this, so mods please feel free to move if it is not.

    I have just started to play around with the Web Module in NXG 4.0. There are quite a few tutorials around, but (unless I have missed something) all of them seem to gloss over the task of getting a trivially simple web VI actually running locally. After a bit of head-scratching I did eventually manage to get something up and running, but I have a question which is hopefully simple for anyone who has used the Web Module.

    When running the web VI inside the NXG IDE, is there anything actually hosting it? In other words, can I visit some address from my web browser to see it running while I am developing it? Or do I have to build it (i.e. turn it into HTML and JS) and then copy the files manually to my NI Web Server directory?


  2. Thanks, JKSH!

    I echo your sentiment. Versioning is always going to be a problem, but TensorFlow just recently hit 2.0, so that was what I was planning on supporting as the minimum version.

    Going down the Python route is also interesting, but a bit fraught. Probably due to my inexperience with Python and its toolchain, I spent the better part of a month just trying to follow myriad tutorials online to get Python and TensorFlow working properly on my PC, with very little to show for my efforts.

    I do think it would be reasonable to say that a toolkit such as this requires LV 2018 or greater. 

  3. Hi everyone, 

    I have recently been playing around with the TensorFlow support built into the Vision Development Module. It seems to work fine and I have the basics up and running. There are a few problems, though, that I would like to solve.

    Firstly, it does not seem to use GPU acceleration at all, which makes it mostly useless for real-time processing at any sensible frame rate.

    Secondly, it is locked into the VDM, which is not cheap and is also totally closed source. Under the hood the regular TensorFlow DLLs are called, but via a layer of the Vision toolkit.

    Third, because of this closed-source layer we are locked into whichever version of TensorFlow NI chooses. I totally get that from their business point of view, but conversely I suspect that there is very little push inside NI to update this regularly due to all the hurdles that come with it.

    My proposal is to implement some kind of community edition of a TensorFlow API that wraps the TensorFlow DLL directly in LabVIEW and exposes hardware acceleration capabilities.

    Anybody interested in collaborating? I know a little bit about TensorFlow, but not enough to be productive, so my first step would just be to mimic the API provided by NI, which is deliberately quite simple.
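    Just to make the idea concrete, here is a minimal sketch of what calling the TensorFlow DLL directly looks like, written in Python/ctypes since I cannot paste a block diagram here (the LabVIEW version would be a Call Library Function Node with the same prototype; the DLL name/path is an assumption for your install):

    ```python
    import ctypes

    # Load the TensorFlow C API library directly (name is an assumption:
    # tensorflow.dll on Windows, libtensorflow.so on Linux).
    tf = ctypes.CDLL("tensorflow.dll")

    # const char* TF_Version(void); -- the simplest entry point in the
    # C API, a good first call to prove the wrapper plumbing works.
    tf.TF_Version.restype = ctypes.c_char_p
    print(tf.TF_Version().decode())  # e.g. "2.0.0"
    ```

    In LabVIEW the equivalent is a single Call Library Function Node configured with a C string return type; the rest of the C API builds on opaque handles (TF_Graph*, TF_Session* and friends) passed around the same way.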

    • Like 1
  4. The error probably goes away in your last scenario because dropping the class constant onto the block diagram causes the LabVIEW compiler to automatically include that class in the build. This kind of thing happens all the time with factory-pattern-style stuff in LabVIEW, where everything works perfectly in the IDE because all the classes are in memory.

    If that is the problem, you can solve it in a number of ways. One of the easiest I have found is to make a dummy VI that you can then include in the build or place somewhere on your top-level VI; in this VI just drop down class constants for all the classes you are using (see the sketch below). This kinda breaks the lazy-loading paradigm, but you win some, you lose some.
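    The same failure mode exists in text languages, which might make the mechanism clearer; here is a loose Python analogy (the plugin module names are made up):

    ```python
    import importlib

    # Factory: load a processor class by name at run time. A build tool's
    # static analysis never sees these modules -- just like the LabVIEW app
    # builder never sees classes that are only ever loaded dynamically by path.
    def make_processor(name: str):
        module = importlib.import_module(f"plugins.{name}")
        return module.Processor()

    # The "dummy VI" trick: reference every plugin explicitly somewhere so
    # the build includes them, at the cost of true lazy loading.
    import plugins.edge_detect  # noqa: F401
    import plugins.threshold    # noqa: F401
    ```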

    Just as a side note, I avoid PPLs like the plague as I just cannot see a good use case for them in "normal" applications. The sheer number of problems that arise with PPLs has caused me to put them in the same category as Shared Variables: nice in principle, but never to actually be relied upon...

  5. So out of curiosity, how does everyone else handle their name generation? With multiple cameras and multiple image-processing steps, each needing temporary storage, I usually try to programmatically generate the names; but now that I think about this thread, there must be a better way that I don't know about.
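    For reference, my generation scheme is nothing fancier than this kind of thing (a minimal sketch with a made-up naming convention; the resulting string is what gets passed to IMAQ Create):

    ```python
    # Hypothetical scheme: one unique buffer name per camera per processing
    # step, so temporary images never collide.
    def buffer_name(camera: int, step: str) -> str:
        return f"cam{camera:02d}_{step}"

    names = [buffer_name(cam, step)
             for cam in range(4)
             for step in ("raw", "filtered", "threshold")]
    print(names[:3])  # ['cam00_raw', 'cam00_filtered', 'cam00_threshold']
    ```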

  6. The one thing that always confused me was writing the actual image to the terminal. If it were purely reference-based, there would be no need to sequence it as I have done in my demo. The indicator gets updated when the actual data value hits it, which is a bit un-reference-like (maybe...). As Rolf says, a strange duck.

    This does help me understand what happens if, for example, I broadcast an Image reference on a user event. No copy of the whole image occurs, and if I do stuff to the reference coming out of the receiving event it might affect my original image; it is not a copy.

  7. Hi Tuan,

    I would be a bit surprised if you could write to disk at 3.2 GB/s for any reasonable length of time, even with a fast SSD. You say longer than 10 s; how much longer? If you can buffer all your data in memory, you could stream it to disk at a slower rate as needed.
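    A quick sanity check on the numbers (using the 3.2 GB/s and 10 s figures from your post; the 60 s flush window is just an assumption for illustration):

    ```python
    rate_gb_s = 3.2        # acquisition rate from the post
    duration_s = 10        # minimum capture length mentioned
    buffer_gb = rate_gb_s * duration_s
    print(f"RAM needed to buffer the capture: {buffer_gb:.0f} GB")   # 32 GB

    flush_s = 60           # hypothetical idle window after the capture
    print(f"Sustained disk rate if flushed over {flush_s} s: "
          f"{buffer_gb / flush_s:.2f} GB/s")                         # 0.53 GB/s
    ```

    So even the 10 s case already needs about 32 GB of RAM to buffer in full, which is why the actual capture length matters so much here.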

    Something I came across recently, which I was surprised even existed, is a feature where an FPGA DMA FIFO transfer can write directly to a TDMS file using a kind of DVR (an external DVR, perhaps). Not sure if your hardware supports that, but it might help with performance.
