PiDi

Members
  • Content Count: 76
  • Joined
  • Last visited
  • Days Won: 13

PiDi last won the day on August 9 2018

PiDi had the most liked content!

Community Reputation: 43

About PiDi

  • Rank: Very Active

Profile Information

  • Gender: Male
  • Location: Poland

LabVIEW Information

  • Version: LabVIEW 2015
  • Since: 2011

Recent Profile Visitors

2,451 profile views
  1. Something like that. CreateALLObjects.vi
  2. Those RT utility VIs get broken whenever you try to open the code in the My Computer project context instead of the RT one. On which target do you develop your framework? Do you have any shared code between the PC and RT? If yes - try to get rid of it.
  3. I've seen this behaviour many times. We use OOP and lvlibs heavily in our projects and unfortunately often run into those builder quirks. The fun fact is that sometimes the same project built on different machines produces different outcomes (on one it always breaks, on another it always succeeds, and on a third it's a 50-50 chance). I never really bothered to investigate and report this to NI, as those problems are very hard to reproduce. Going to your specific question about VIs which end up outside the exe (in no particular order):

     1. Check the dependencies between libraries. The builder hates circular dependencies between them (i.e. something in lib A depends on something in lib B and vice versa). Avoid them - usually the solution is to move the offending VIs between those libraries. You'll also end up with a better code design, so double the profit.
     2. Enable builder log generation (I think it's in Advanced settings) and see if you can make any sense of what it puts out regarding those "outside" VIs.
     3. Try building on a different machine.
     4. Set "Enable debugging" for the build (also in Advanced settings, I think).
     5. Try to recreate the problematic VIs under new names - create a blank VI, copy the contents (block diagram) of the current VI, and replace the calls. Do not simply rename the current VI and do not Save As... - both of those might retain some hidden problems inside the VI. Also, try changing the reentrancy settings and try making them inline.
     6. Try to recreate the callers of those problematic VIs in the same manner as above.
     7. Separate compiled code from the VIs.
     8. On the contrary, do not separate the compiled code. Try both and see if there is any difference.
     9. Last resort - put the contents of the VI inside the caller.
  4. It is actually possible to force a DLL to use shared memory space, so it still may be the problem. I had a similar problem once: when I ran the DLL in development OR in the exe, it was fine. But when the exe was running and I simultaneously ran the dev VI, it would crash. I never bothered to look for the actual solution, as I only needed one exe in production. But I've heard that renaming the DLL for each instance of the application might be a solution, as the memory space, shared sections included, is allocated according to the DLL name. I'm not really sure if it's true, but if you're desperate, it is worth a try.
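    A minimal sketch of what such a shared section looks like from the C side, assuming MSVC - the section name, variable and exported function are just illustrations, not anything LabVIEW-specific:

        // shared_counter.c - build as a DLL (e.g. cl /LD shared_counter.c).
        #include <windows.h>

        // Variables in this segment are mapped into one page shared by every
        // process that loads this DLL under the same file name. They must be
        // initialized, or the compiler moves them out of the named segment.
        #pragma data_seg(".shared")
        volatile LONG g_sharedCounter = 0;
        #pragma data_seg()

        // Tell the linker to mark the section Read/Write/Shared.
        #pragma comment(linker, "/SECTION:.shared,RWS")

        __declspec(dllexport) LONG IncrementShared(void)
        {
            return InterlockedIncrement(&g_sharedCounter);
        }

    A renamed copy of the DLL is a different module, so it gets its own copy of that section - which would explain why the renaming trick works.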
  5. Hello LAVA-ers! If you haven't heard already, the CLA Summit 2020 will take place in Budapest from March 24th to March 26th. You can read more and register here: https://events.ni.com/profile/web/index.cfm?PKwebID=0x94716776a&varPage=home . This year we have an Open Source Session dedicated to people who want to show their projects and encourage collaboration. The rules are simple:

     • Contact either me (through private message or email: piotr.demski@sparkflow.pl), Thomas McQuillan (who also wrote about it on LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:6628225386339803137/ ) or Anastasiia Vasylieva (anastasiia.vasylieva@ni.com)
     • You'll prepare a three-minute presentation to showcase your project (we'll provide you with a two-slide presentation template to fill in the most important info)
     • (optional) We also have a poster session, so you can prepare an A1-sized poster to display at the Summit (please also contact us if you're interested)

     I know that not everyone here is a CLA and able to come to the Summit, but I encourage you to submit your project anyway. We'll compile all the presentations into one document and distribute it in the LabVIEW community. So it's also a good way to promote your project!
  6. Hi Porter, where is this Palette API? I've never used it, so I'm a bit in the dark. Would you like to help create this plugin for G Code Manager?
  7. This (controlling the VISA resource from a different computer) is generally possible using VISA Server (https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019LfHSAU&l=pl-PL) - see the sketch below for what the client side looks like. But I have no idea if VISA Server is even available on Linux, so the chances that it will run on Raspbian are close to zero...
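    For reference, a minimal sketch of the client side once a VISA Server is running somewhere - "remote-pc" and the serial resource are hypothetical placeholders; the point is only the "visa://host/" prefix in the resource name:

        /* Open a VISA resource through a remote machine's VISA Server. */
        #include <visa.h>

        int main(void)
        {
            ViSession rm, instr;
            ViChar rsrc[] = "visa://remote-pc/ASRL1::INSTR";

            if (viOpenDefaultRM(&rm) < VI_SUCCESS)
                return 1;
            /* The "visa://host/" prefix routes the open to the VISA
               Server on that host instead of the local machine. */
            if (viOpen(rm, rsrc, VI_NULL, VI_NULL, &instr) < VI_SUCCESS) {
                viClose(rm);
                return 1;
            }
            /* ...viWrite/viRead as with a local resource... */
            viClose(instr);
            viClose(rm);
            return 0;
        }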
  8. NI DAQ alternatives

    I got it from the license agreement. I've also found a topic on that matter: https://www.labviewmakerhub.com/forums/viewtopic.php?f=12&t=1463&p=7385&hilit=license+agreement#p7385 , and it seems that the intention was to exclude only the BBBlack and Raspberry Pi from commercial use... But I don't think it is in any way clearly stated in the license agreement (I'm not a lawyer either, so maybe I'm just missing something).
  9. NI DAQ alternatives

    LINX is not for commercial use, at least that's what they say in the license agreement. And actually, the customer who doesn't believe this is a robust solution would be absolutely right.
  10. NI DAQ alternatives

    You can also get other PLCs in this price range. Though I've never actually used them, the Rockwell Micro820 and 830 series look interesting (industrial grade, some built-in DIO and AIO, the ability to extend with pluggable modules, Ethernet/IP communication). And AFAIK they come with free software (Connected Components Workbench). There are of course others - Siemens LOGO!, and Mitsubishi also has some cheaper models...
  11. NI DAQ alternatives

    You might want to google "Remote I/O" (not "Distributed I/O" - those are two slightly different things). Every industrial automation company has some (Advantech, Phoenix, Eaton, Moxa... just off the top of my head). If this is slow, simple monitoring of some DIO states, you might be able to find a Modbus solution which would not generate additional hardware costs (just connect an Ethernet or serial cable, grab a free toolkit and you're good to go - see the sketch below). But if you need something more complex, then you might quickly get to the point where adding an NI-based solution to your already NI-based project is not really more expensive.
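    A minimal sketch of the Modbus route, assuming the open-source libmodbus library and a hypothetical Remote I/O module at 192.168.1.10 that exposes its DIO states as discrete inputs (check your module's register map):

        /* Poll eight discrete inputs over Modbus TCP with libmodbus.
           The address, port and channel count are placeholders. */
        #include <stdio.h>
        #include <modbus.h>

        int main(void)
        {
            modbus_t *ctx = modbus_new_tcp("192.168.1.10", 502);
            if (ctx == NULL)
                return 1;
            if (modbus_connect(ctx) == -1) {
                modbus_free(ctx);
                return 1;
            }

            uint8_t states[8];
            /* Read 8 discrete inputs starting at address 0 (one per DIO channel). */
            if (modbus_read_input_bits(ctx, 0, 8, states) == 8) {
                for (int i = 0; i < 8; i++)
                    printf("DI%d = %d\n", i, states[i]);
            }

            modbus_close(ctx);
            modbus_free(ctx);
            return 0;
        }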
  12. Ahhh, so if you don't actually need to process the data in 4 µs, but can have a little more latency, that changes a lot! Using the Pipelined Streaming architecture in the Xilinx FFT Core you can stream the data continuously and it will generate results continuously, after an initial latency (the latency figures are in the Fast Fourier Transform v9.0 LogiCORE IP Product Guide). In other words: if you start streaming the data continuously, you'll get the first output value after X µs, and then the next value every clock cycle. Though you'll need to properly implement the AXI4-Stream protocol around the IP Core (basically this: http://zone.ni.com/reference/en-XX/help/371599K-01/lvfpgaconcepts/xilinxip_using/ , but there are some caveats when interfacing with Xilinx IP Cores) and some buffering. I also agree with smithd that with enough "latency budget" the GPU way might be more cost-efficient. The rough arithmetic below shows what streaming buys you.
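    To put rough numbers on that (illustrative figures only, assuming a 2048-point FFT streaming one sample per clock at 300 MHz): once the pipeline is full, a new frame completes every 2048 / 300 MHz ≈ 6.8 µs, no matter how long the initial latency is - so the streaming architecture buys you throughput, while the first-result latency stays as quoted in the product guide.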
  13. If we go down the Xilinx FFT IP Core path: the FPGA itself is not the limiting factor in the clock frequency, the IP Core is. Take a look at the Xilinx docs: https://www.xilinx.com/support/documentation/ip_documentation/ru/xfft.html . Assuming a Kintex-7 target and the pipelined architecture, you probably won't make it to 400 MHz. From my own experience - when trying to compile the FFT core at 300 MHz I got about a 50% success rate (that is, 50% of the compilations failed due to the timing constraints) - but this is FPGA compilation, so when you're at the performance boundary, it is really random. We can also take a look at older versions of the FFT IP - Xilinx even included latency information there: https://www.xilinx.com/support/documentation/ip_documentation/xfft_ds260.pdf . Take a look at page 41 for example - they didn't go under 4 µs. OK, that's Xilinx, but you say: "According to NI, the FFT runs in 4 μs (2048 pixels, 12bit) with the PXIe 7965 (Virtex-5 SX95T)." - I can't find it, could you provide a reference? The GPU itself should be able to do the FFT calculation in that time with no problem; the limiting factor is the data transfer to and from the GPU. I wrote all of the above to provide a bit of perspective, but I'm not saying that this is impossible to do. I'd rather say that the only way to know for sure is to actually prototype it, try different configurations and implementations and see if it works. So, wrapping this up, I would review the requirements (OK, if you say that it is absolutely 4 µs without any doubts, then let's stick with it - and I really think it's awesome to push the performance to the limits). Then try to get hold of some FPGA target (borrow it somewhere? from NI directly?) and try to implement this FFT. And the same for the GPU (for the initial tests you could probably go with any non-ancient GPU you can find in a PC or laptop).
  14. With the Xilinx FFT IP Core the latency is usually about two times the FFT length and the max achievable clock frequency is about 300 MHz. With a 1024-point FFT that gives you about 2 × 1024 cycles / 300 MHz ≈ 7 µs latency. And we're talking about the 1D FFT only, so we'd also need to account for image acquisition, data preparation for the FFT, and post-FFT processing and decision making. And by the way, 4 µs per frame is 250,000 frames per second. There are two possibilities: either your requirements need a bit of grooming... Or you're working on some amazing project which I would love to take part in.
  15. If you want to simply change the class banner color, check this: