Everything posted by ensegre

  1. I would say: the most beneficial fields are those relevant to the problem you have to deal with. If you do signals, functional analysis is good. If you do computational geometry, geometry is good. If you do image processing... you name it. Labview is only a programming tool. It's not that you become more proficient in Labview because you know a special branch of maths; it's not as if knowing graph theory makes you better at grasping diagrams [in fact LV diagrams are just a representation of dataflow, more akin to an electronics schematic than to formal graph theory]. Rather, in general terms, I would say numerical analysis and sound principles of algorithm design really help: how to make an efficient algorithm for doing X, how truncation errors propagate, how to optimize resource use, etc. But this is true of any programming language used for practical problem solving. Formal language theory, compilers - not really; LV conceals those details from you. Unless your task is to implement a compiler in G...
  2. As an aside: I realize that the computation of the current pixel coordinates could be avoided by getting them the way you did; however, it seems that those coordinates are not always polled at the right time; for instance I get {-1,-1} during mouse scroll. That might be part of the problem...
  3. This is an imperfect solution from a project of mine. A scroll of the mouse wheel zooms in or out by a factor sqrt(sqrt(2)), centering the zoom on the pixel upon which the cursor lies. The arithmetic is easy: {ox,oy} -> {px,py} - {px-ox,py-oy}*z1/z2, where {ox,oy} is the origin, {px,py} are the image coordinates of the pixel pointed at, and z1, z2 are the old and new zoom factors. That is, the new origin just moves proportionally along the line connecting the old origin and the current pixel, all in image coordinates (a sketch of the arithmetic follows the list below). Unlike you, I haven't implemented limits on the zoom factor based on the image size and position; perhaps one should.
  4. [collided in air with infinitenothing] Single-port GigE is ~120 MBytes/sec (1 Gbit/s is 125 MB/s raw, minus protocol overhead). You're not talking of bits/sec? Or of a dual GigE (I ran into one once)? GigE (at least GenICam compliant) is supported by IMAQdx. Normally I just use high-level IMAQdx blocks (e.g. GetImage2) and get good throughput, whatever is under the hood. But a camera driver sitting in the way might make the transfer less efficient than ideal.
  5. Git. Because my IT, when begged for centralized SCC of some sort, settled for an intranet installation of GitLab, which I'm perfectly fine with. But I'm essentially a sole developer, so SCC to me is more for version tracking than for collaboration. Tools: git-cola and the command line; reasonably happy with GitKraken, and using git-gui on Windows as a fallback.
  6. This way perhaps? (or maybe this one, since by now we're all becoming obsolete)
  7. Also (I'm on linux, desktop), Crash logger.vi is broken because of a missing /<vilib>/nisysconfig/Close.vi and Close (System).vi (This VI needs a driver or toolkit component that is not found. Missing resource file "nisysapi.rc"). And, perhaps as a consequence, the "System Session" nodes lack all of the properties they're supposed to have. Was the code really tested on linux? [maybe on some RT NI-Linux, which I don't have?]
  8. Try it like this... you have to put in the right amplitudes and phases yourself.
  9. A potential caveat: I've used this pattern in the past to generate a VI with a default string value containing the build date and git version, and included it in the project I was building. Only, when I tried to use it as a pre-build action, most of the time I got spectacular LV crashes, recoverable only by clearing the compiled object cache. I presume that something becomes stale there when the VI is marked as unmodified but is in fact modified during the build. I gave up tracking down the issue and just resolved to run my tag-generating VI manually just before the build, with the project closed. That was in LV2014 and 2015 at the time. I saved some logs; the cryptic errors I used to get were of this sort:

     Error 1124 occurred at ... Possible reason(s): LabVIEW: VI is not loadable. (a perfectly loadable and unrelated VI)
     DAbort 0x1A7102DF in fpsane.cpp ... Someother.vi (another sane and unrelated VI)
     The build was unsuccessful. Possible reasons: An error occurred while building the application. Either you do not have the correct permissions to create the application at the specified location or the application is in use. Invoke Node in AB_EXE.lvclass:Build.vi->AB_Engine_Build.vi->AB_Build_Invoke.vi->AB_Build_Invoke.vi.ProxyCaller <APPEND> Method Name: Build:Application. Details: Click the link below to visit the Application Builder support page. Use the following information as a reference: Error 8 occurred at AB_EXE.lvclass:Build.vi -> AB_Engine_Build.vi Possible reason(s): LabVIEW: File permission error. You do not have the correct permissions for the file.
     NI-488: DMA hardware error detected. (NI-488 DMA? WTH?)
     Error 1 occurred at EndUpdateResourceA.vi Possible reason(s): LabVIEW: An input parameter is invalid. For example if the input is a path, the path might contain a character not allowed by the OS such as ? or @.
     NI-488: Command requires GPIB Controller to be Controller-In-Charge.
  10. If you do images, or call something inside a DLL, nothing would be too insane. But I guess you already did your homework in trying to track that down. What looks strange is the saturation at 3 GB and then the sudden drops and recoveries. It makes me suspect a problematic corner case of LV's garbage collector... I don't know if it helps, but your post reminded me of this old discussion. There I hijacked the thread to complain about what definitely turned out to be a bug in the LV web server, which appeared in one LV version and was silently covered up a couple of versions later. That thread goes on a bit in the tone of "trimming has nothing to do with a bug", "yes there is a bug", but essentially it is about a call to the Windows API to trim the process working set, which might be of some use to your testing.
  11. Will the smart cam run OCR onboard? If that is not required, a properly placed webcam and a couple of LEDs might just do, for much less. As Tim_S wrote, the art is in setting things up so you always get a clean image. The OP doesn't say whether his next question would then be how to use IMAQ, image preprocessing, OCR and all that's involved.
  12. It occurs to me that maybe only NI-SCOPE cards have real trigger inputs. But for normal DAQ cards, you could use a scheme in which, even with a software start, you first start the acquisition on the event channel and on a fiducial channel, then you output the control signal, which is also routed to the fiducial input. Since the relative timing of the sampled data is deterministic (channels are either simultaneously sampled in high-end cards, or round-robin in lower ones), analysis of the two sampled signals should give you the answer (see the sketch after this list).
  13. Why not just use your DO signal as the trigger for starting the DI acquisition? Subsequent analysis of the acquired waveform would measure the time of whatever you define as the event, wouldn't it? The 1 ms and the desired DI time resolution alone dictate the required sampling rate, but nowadays even entry-level DAQs are capable of ksps.
  14. Guessing that this stream comes from a VISA read, you'll probably want this:
  15. Isn't exporting from LV to a file, to be read within HTML, enough of a performance hit?
  16. I don't have direct experience with it, but I guess that if all you are after is passing image data and displaying it, that would be practicable; but if your aim is to interface even only a subset of OpenCV directly with LV, that would be quite a different story. The difficulty of the task has been mentioned in the past, e.g.
  17. I gave some serious thought to the picture control, which is still LV and probably UI-thread and CPU demanding. I didn't really get that far, but at least I sort of implemented zoom and a pixel value indicator, and, taxing the CPU, I get into the 30 fps range. Not even nearly as nice as the IMAQ display, but at least an alternative. It's in my https://lavag.org/files/file/232-lvvideo4linux/
  18. Just tried on my laptop to confirm: sudo ./INSTALL <--nodeps> works for 2016 64b and 32b, and for NIVISA1600L. However visaconf, etc., as well as LV VIs referencing VISA, coredump even after updateNIDrivers and a reboot. Excellent opportunity to confirm that sudo NIVISA1600L/UNINSTALL also works.
  19. I'd rather say, instead, that my painful experiences were with older versions and RH-based distros like CentOS. There I remember having to fiddle with the C of some sources, guessing from parallel reports on the NI site, and yes, hardlinking the right MESA (it's an OpenGL emulation library) or something like that. As a reminder to myself, former noise of mine on the dark side: Re: nikal on 2.6.17 kernel, Re: NIDAQmx on Fedora Core 5 How-To. With newer versions it got less and less painful; if anything, barely more than alien-ing the rpms. Of course, as long as you're not serious about hardware support.
  20. I have LV12, 13, 14, 15 on my desktop (probably an ubuntu 12 release upgraded to 14) and 14, 15, 16 on my laptop (ubuntu 14 upgraded to 16). So yes, it is possible. How do I do it? I forget year after year, immediately after I succeed. IIRC it usually involves an "Oh, ./INSTALL.sh gives an obscure ']] not expected' syntax error", and an "ok, let's copy all the *.rpm to /tmp and alien -i them". On those two systems I can confirm the installations survive distribution upgrades; the files just sit in /usr/local/natinst/. I usually never bother to get VISA really working; I only use the installations for code-checking, algorithm development and GUI work, without being too picky about font look (which however somewhat improved with 2016). Whenever I did bother about VISA, I think I always stumbled on non-compilable kernel modules. Maybe the last time I somehow succeeded in having VISA working on an unsupported distro was in 2008.
  21. You are the master of your code, you can do what you want. Perhaps you're asking: if I get a full image from my camera, can I extract a ROI with IMAQ? Short answer, yes, http://zone.ni.com/reference/en-XX/help/370281P-01/imaqvision/region_of_interest_pal/. But you may also want to look into getting only a ROI from the camera, to reduce the payload. To do that, you send settings to the camera and get only that part of the image; you can't expect to draw something on a LV widget and magically have the camera know about it, unless you code for that. I think you are confusing the IMAQ image buffer size with the actual content acquired by the camera and transferred to the computer. IIRC the IMAQ images auto-adapt their size if they are wired to receive image data of a size different from what they were set to. Of course you can also get your images at high resolution and resample them, but that adds processing time. You may have to grasp how IMAQ images are handled, by the way - normally transformations are never in place; they require the preallocation of a source and a destination buffer.
  22. The application has to render a larger number of scaled pixels onscreen. The performance drop may be particularly noticeable on computers with weaker graphics cards (e.g. motherboards with integrated graphics chipsets), which defer part of the rendering computation to the CPU. If you process smaller images, your processing time may also be proportionally smaller; additionally, image transfer and memcopy times will be smaller. But images at lower resolution contain less information; you are the one who has to decide how small the resolution can be in order to still derive meaningful results from the analysis. If the bottleneck is rendering, you could constrain the GUI so that the display never gets too wide, or the zoom factor never too high. Another popular approach is to process every image from the camera, but to display only one image every so many (see the sketch after this list). Depends on the camera. Look for image size, ROI (AOI), binning. If you can with MAX, you can with LV (GigE, almost surely GenICam -> IMAQdx properties) [with the advantage that in MAX you can save the default settings]. If you can't with MAX/LV, it might be that a camera SDK allows control of additional features; check your camera documentation.
  23. Could you trim that down to some code without hardware dependence that we can look at? Oh, and you mentioned dynamic events. Are they involved here, so that you have reasons to blame them?
  24. Doh, completely right. That would be the first of the alternatives given in https://en.wikipedia.org/wiki/Shoelace_formula, plus argument checks. It occurred to me that, among the various alternatives, both the one I thought of and this one are the most efficient, needing only two array allocations and one multiplication; the determinant form in http://mathworld.wolfram.com/PolygonArea.html needs two multiplications (a sketch follows below).
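
A minimal sketch of the zoom-to-cursor arithmetic described in post 3, written in Python for illustration rather than as the original LabVIEW code; the function name and the wheel handling are assumptions, only the origin-update formula comes from the post.

    ZOOM_STEP = 2 ** 0.25  # sqrt(sqrt(2)), the per-notch zoom factor from the post

    def zoom_at_pixel(origin, pixel, z_old, zoom_in):
        """Return (new_origin, z_new) so the pixel under the cursor stays put.

        origin and pixel are (x, y) in image coordinates; z_old is the current zoom.
        """
        z_new = z_old * ZOOM_STEP if zoom_in else z_old / ZOOM_STEP
        ox, oy = origin
        px, py = pixel
        # {ox,oy} -> {px,py} - {px-ox, py-oy} * z1 / z2
        new_origin = (px - (px - ox) * z_old / z_new,
                      py - (py - oy) * z_old / z_new)
        return new_origin, z_new

    # zooming in one notch around pixel (100, 50) from the default view
    print(zoom_at_pixel((0.0, 0.0), (100.0, 50.0), 1.0, True))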
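For the fiducial-channel scheme of posts 12 and 13, a plain NumPy sketch (no DAQ calls; the threshold and sample rate are made-up values) of the analysis step: the looped-back control signal and the event signal are sampled together, so the event time is just the spacing between their first threshold crossings, with a resolution of one sample period.

    import numpy as np

    def first_crossing(samples, threshold):
        """Index of the first sample at or above threshold (rising edge)."""
        above = np.nonzero(np.asarray(samples) >= threshold)[0]
        return above[0] if above.size else None

    def event_delay(fiducial, event, sample_rate, threshold=2.5):
        """Seconds from the control edge (fiducial channel) to the event edge."""
        i0 = first_crossing(fiducial, threshold)
        i1 = first_crossing(event, threshold)
        if i0 is None or i1 is None:
            return None
        return (i1 - i0) / sample_rate

    # fabricated 100 kS/s records: the event follows the control edge by 1.23 ms
    fs = 100_000
    t = np.arange(0, 0.01, 1 / fs)
    fiducial = (t >= 0.002) * 5.0
    event = (t >= 0.00323) * 5.0
    print(event_delay(fiducial, event, fs))  # ~1.23e-3 s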
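And a toy sketch of the decimated-display idea from post 22 - process every frame, render only one out of N. It is deliberately generic Python: grab_frame, process and display are hypothetical placeholders for the camera grab, the analysis and the display update, whatever they are in the actual application.

    DISPLAY_EVERY = 5  # arbitrary choice; tune to what the GUI rendering can sustain

    def acquisition_loop(grab_frame, process, display, n_frames):
        for i in range(n_frames):
            frame = grab_frame()          # acquire at the full camera rate
            process(frame)                # analysis sees every frame
            if i % DISPLAY_EVERY == 0:    # rendering only sees one frame in N
                display(frame)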
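Finally, for post 24, a NumPy sketch of a single-multiplication variant of the shoelace formula, A = 1/2 |sum of x_i * (y_{i+1} - y_{i-1})|, as opposed to the determinant form x_i*y_{i+1} - x_{i+1}*y_i which takes two multiplications per vertex; whether this is exactly the alternative the post refers to is an assumption.

    import numpy as np

    def polygon_area(x, y):
        """Area of a simple polygon from its vertex coordinate arrays."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        # y_{i+1} - y_{i-1} with wrap-around indexing, then one dot product with x
        return 0.5 * abs(np.dot(x, np.roll(y, -1) - np.roll(y, 1)))

    print(polygon_area([0, 1, 1, 0], [0, 0, 1, 1]))  # unit square -> 1.0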