PiDi

Members
  • Content Count: 70
  • Joined
  • Last visited
  • Days Won: 13

PiDi last won the day on August 9, 2018

PiDi had the most liked content!

Community Reputation

41

About PiDi

  • Rank: Very Active

Profile Information

  • Gender: Male
  • Location: Poland

LabVIEW Information

  • Version: LabVIEW 2015
  • Since: 2011

Recent Profile Visitors

1,949 profile views
  1. This (controlling a VISA resource from a different computer) is generally possible using VISA Server (https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019LfHSAU&l=pl-PL) - a minimal client-side sketch below. But I have no idea if VISA Server is even available on Linux, so the chances that it is available, or will run, on Raspbian are close to zero...
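     A minimal sketch of what the client side looks like, assuming VISA Server is enabled on a Windows machine named "remote-pc" and the device shows up there as ASRL1::INSTR (both names are placeholders); I'm using PyVISA here just to show the remote resource syntax:

```python
import pyvisa

rm = pyvisa.ResourceManager()
# The "visa://host/resource" syntax tells NI-VISA to route the session
# through the VISA Server running on the remote machine.
inst = rm.open_resource("visa://remote-pc/ASRL1::INSTR")
print(inst.query("*IDN?"))
inst.close()
```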
  2. PiDi

    NI DAQ alternatives

    I got it from the license agreement. I've found a topic on that matter: https://www.labviewmakerhub.com/forums/viewtopic.php?f=12&t=1463&p=7385&hilit=license+agreement#p7385 , and it seems that the intention was to exclude only the BeagleBone Black and Raspberry Pi from commercial use... But I don't think that is clearly stated anywhere in the license agreement (I'm not a lawyer either, so maybe I'm just missing something).
  3. PiDi

    NI DAQ alternatives

    LINX is not for commercial use - at least that's what they say in the license agreement. And actually, a customer who doesn't believe this is a robust solution would be absolutely right.
  4. PiDi

    NI DAQ alternatives

    You can also get other PLCs in this price range. Though I've never actually used them, the Rockwell Micro820 and 830 series look interesting (industrial grade, some built-in DIO and AIO, the ability to extend with pluggable modules, EtherNet/IP communication). And AFAIK they come with free software (Connected Components Workbench). There are of course others - Siemens LOGO!, and Mitsubishi also has some cheaper models...
  5. PiDi

    NI DAQ alternatives

    You might want to google "Remote I/O" (not "Distributed I/O" - those are two slightly different things). Every industrial automation company has some (Advantech, Phoenix Contact, Eaton, Moxa... just off the top of my head). If this is slow, simple monitoring of some DIO states, you might be able to find a Modbus solution which would not generate additional hardware costs (just connect an Ethernet or serial cable, grab a free toolkit and you're good to go - see the sketch below). But if you need something more complex, then you might quickly get to the point where adding an NI-based solution to your already NI-based project is not really more expensive.
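     For illustration, a minimal sketch of the "free toolkit" idea, assuming a hypothetical remote I/O module at 192.168.0.10 that exposes its digital inputs as Modbus discrete inputs 0-7 (the address map is device-specific - check the module's manual). This uses the pymodbus library (3.x API) rather than a LabVIEW toolkit:

```python
from pymodbus.client import ModbusTcpClient

# Connect to the remote I/O module over plain Ethernet (Modbus TCP, port 502).
client = ModbusTcpClient("192.168.0.10", port=502)
client.connect()

# Poll eight digital input states starting at address 0.
result = client.read_discrete_inputs(0, count=8)
if not result.isError():
    print("DI states:", result.bits[:8])

client.close()
```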
  6. Ahhh, so if you don't actually need to process the data in 4 µs, but can accept a little more latency, that changes a lot! Using the Pipelined Streaming I/O architecture of the Xilinx FFT core you can stream the data in continuously and it will generate results continuously, after an initial latency (see the Fast Fourier Transform v9.0 LogiCORE IP Product Guide). In other words: if you start streaming the data continuously, you'll get the first output value after X µs, and then the next value every clock cycle - quick numbers below. Though you'll need to properly implement the AXI4-Stream protocol around the IP core (basically this: http://zone.ni.com/reference/en-XX/help/371599K-01/lvfpgaconcepts/xilinxip_using/ , but there are some caveats when interfacing with Xilinx IP cores) and some buffering. I also agree with smithd that with enough "latency budget" the GPU way might be more cost-efficient.
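     A quick throughput-vs-latency check of what streaming buys you (my arithmetic, assuming one sample per clock and the ~300 MHz clock discussed in the posts below):

```python
n = 2048          # FFT length in points (matching the 2048-pixel frames)
f_clk = 300e6     # assumed achievable clock frequency, Hz

# Once the pipeline is full, a complete FFT frame comes out every n cycles:
print(f"Sustained frame period: {n / f_clk * 1e6:.1f} us")      # ~6.8 us

# The latency to the *first* result is larger, roughly 2*n cycles:
print(f"First-result latency:   {2 * n / f_clk * 1e6:.1f} us")  # ~13.7 us
```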
  7. If we go down the Xilinx FFT IP core path: the FPGA itself is not the limiting factor for clock frequency - the IP core is. Take a look at the Xilinx docs: https://www.xilinx.com/support/documentation/ip_documentation/ru/xfft.html . Assuming a Kintex-7 target and the pipelined architecture, you probably won't make it to 400 MHz. From my own experience: when trying to compile the FFT core at 300 MHz I got about a 50% success rate (that is, 50% of the compilations failed due to timing constraints) - but this is FPGA compilation, so when you're at the performance boundary it is really random. We can also take a look at older versions of the FFT IP - Xilinx even included latency information there: https://www.xilinx.com/support/documentation/ip_documentation/xfft_ds260.pdf . Take a look at page 41, for example - they didn't go under 4 µs.

     OK, that's Xilinx, but you say: "According to NI, the FFT runs in 4 μs (2048 pixels, 12bit) with the PXIe 7965 (Virtex-5 SX95T)." - I can't find that, could you provide a reference? The GPU itself should be able to do the FFT calculation in that time with no problem; the limiting factor is the data transfer to and from the GPU (a rough budget below).

     I wrote all of the above to provide a bit of perspective, but I'm not saying this is impossible. I'd rather say that the only way to know for sure is to actually prototype it, try different configurations and implementations, and see if it works. So, wrapping this up, I would review the requirements (OK, if you say it is absolutely 4 µs without any doubt, then let's stick with it - and I really think it's awesome to push the performance to the limits). Then try to get hold of some FPGA target (borrow it somewhere? from NI directly?) and try to implement this FFT. And the same for the GPU (for the initial tests you could probably go with any non-ancient GPU you can find in a PC or laptop).
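     A rough data-transfer budget for the GPU option (my arithmetic, not a benchmark; assumes the 12-bit pixels are unpacked to float32 on the host):

```python
n = 2048                      # pixels per frame
frame_bytes = n * 4           # float32 samples

budget = 4e-6                 # the whole 4 us budget, ignoring compute time
required_bw = 2 * frame_bytes / budget   # data has to go in *and* come out
print(f"Required transfer bandwidth: {required_bw / 1e9:.1f} GB/s")  # ~4.1 GB/s

# That is within PCIe reach, but a single CUDA kernel launch alone typically
# costs a few microseconds - which is why transfer and launch overhead, not
# the FFT math itself, dominates at this scale.
```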
  8. With the Xilinx FFT IP core, the latency is usually about two times the FFT length and the maximum achievable clock frequency is about 300 MHz. With a 1024-point FFT that gives you roughly 7 µs of latency (numbers below). And we're talking about the 1D FFT only, so we'd also need to account for image acquisition, data preparation for the FFT, and post-FFT processing and decision making. And by the way, 4 µs is 250,000 frames per second. There are two possibilities: either your requirements need a bit of grooming... or you're working on some amazing project which I would love to take part in.
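     Where those numbers come from (my arithmetic, based on the rules of thumb above):

```python
n = 1024          # FFT length in points
f_clk = 300e6     # ~max achievable clock frequency, Hz

print(f"Latency ~ 2*n cycles = {2 * n / f_clk * 1e6:.1f} us")   # ~6.8 us
print(f"4 us per frame = {1 / 4e-6:,.0f} frames per second")    # 250,000
```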
  9. If you simply want to change the class banner color, check this:
  10. I've been playing a bit with AutoIt (https://www.autoitscript.com/site/autoit/). But I've faced the same problem - there is no way, other than screen coordinates, to refer to any GUI items in LabVIEW (a sketch of that workaround below). What exactly those "Object IDs" are, where they come from, and why they are not present in LabVIEW - those questions are beyond my knowledge. But if someone has been successful with GUI testing in LV, I'd be interested to hear about that too.
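     For what it's worth, a sketch of that coordinate/image-based workaround using the pyautogui library instead of AutoIt (my substitution for illustration; "run_button.png" is a hypothetical screenshot of the control to click):

```python
import pyautogui

try:
    # Image matching is the usual fallback when there are no object IDs
    # to query - find the control by its appearance, then click its center.
    location = pyautogui.locateCenterOnScreen("run_button.png")
    if location is not None:
        pyautogui.click(location)
except pyautogui.ImageNotFoundException:
    print("Control not found on screen")
```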
  11. I haven't seen this since around LV2013 (or 12?). In my case it was connected to a very obscure compiler error which disabled LabVIEW's ability to build anything afterwards. LV would throw a generic error window with something like "GenIL error", and after that - boom - no build would ever work again. Bad news: the only way to fix it was to wipe the computer clean and reinstall everything (even a "clean" uninstallation of NI software didn't help). Good news: yours is probably some other problem somewhere in the initialization of the build, as the one I'm talking about was supposedly fixed a long time ago. Have you tried enabling "Generate build log file" on the Advanced page of the build? Or just clearing the LabVIEW Data folder again?
  12. Put all your classes in libraries. (I know this might contradict what others say about NOT putting classes in libraries, but it's worth a try. Also, I keep all my classes in libraries and haven't seen this error in a long time.)
  13. I have no experience with that, but quick googling led me to this: http://www.topazsystems.com/msoffice-plugins.html - maybe something like that would be a solution for you, as you're already creating reports in Excel?
  14. There is no direct method. BUT we can always hack something. I've attached a little plugin that allows you to change the default name of newly created VIs. It's based on project provider magic, so use it at your own risk (this is the dark magic we're talking about here). Instructions:
     1. Install this package.
     2. Go to the <LabVIEW>\resource\Framework\Providers\DefaultNamesChanger folder.
     3. Open the DefaultNamesChanger.ini file.
     4. Edit the "VINameTemplate" key.
     5. Save the ini file and restart LabVIEW.
     If you'd rather script step 4, there's a sketch below.
     pidi_lib_defaultnameschanger-1.0.0.6.vip
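     A hedged sketch of scripting that edit with Python's configparser - the install path, section name, and template value below are all assumptions, so open the real ini file first to see its actual structure:

```python
import configparser

# Path assumes a default LabVIEW 2015 install; adjust to your <LabVIEW> folder.
path = (r"C:\Program Files\National Instruments\LabVIEW 2015"
        r"\resource\Framework\Providers\DefaultNamesChanger\DefaultNamesChanger.ini")

cfg = configparser.ConfigParser()
cfg.optionxform = str  # preserve key case ("VINameTemplate")
cfg.read(path)

# Section name and template value are my guesses - check the real file first.
cfg["DefaultNamesChanger"]["VINameTemplate"] = '"My Custom VI.vi"'
with open(path, "w", encoding="utf-8") as f:
    cfg.write(f)
```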