
smithd
Members · 763 posts · 42 days won
Everything posted by smithd

  1. I've never used this, but it might help: http://www.nirsoft.net/utils/gdi_handles.html seems to give you a rough idea of what the GDI objects are actually being used to display.
  2. Even with this? http://www.ni.com/white-paper/53072/en/ The help seems to indicate that they provide a simple function for converting from an IMAQdx image to an OpenCV Mat. That said, I mention the OpenCV bit more as a "and hey, if I need complex processing, at least it's handy."
     Ah, I see, that makes sense. As an aside, I actually tried implementing Motion JPEG (https://en.wikipedia.org/wiki/Motion_JPEG#M-JPEG_over_HTTP), which worked... OK (and in fact you can do PNG as well), but it took me a while to figure out how to do it with the LabVIEW web server, and on my machine it ended up being slower than the regular IMAQ stuff, probably because of all the compression and decompression. (There's a rough non-LabVIEW sketch of the M-JPEG-over-HTTP idea at the end of this post list.)
     More fundamentally, what I have is sort of a combo of basic machine vision which runs constantly on the acquiring device. What I want to provide on the client side is (a) the original image, (b) the result of the basic processing (i.e. putting a crosshair or a box around a pre-calculated feature), and (c) letting the user add their own ROIs to the image to help them perform some simpler analysis (i.e. use the 'line' ROI tool to convert the 2D image into a 1D chart of pixel values, or histogram the region inside a drawn box). But as you can imagine, (a) and (b) are a totally different mode of operation than (c) -- you can't look at the histogram of each of 30 images at once, but you can stand a few feet back from the screen and scan through the images themselves pretty quickly.
     As I'm writing this down, it makes me wonder if I shouldn't just split those up entirely. Thus far I've been trying to make one exe to rule them all... but maybe it would be better to keep the (c) use case in LabVIEW and move (a) and (b) over to something faster like the C# EmguCV display or the HTML canvas, since they are read-only and thus would take less effort.
  3. Yes, but as Mark pointed out, I'm pretty sure you'd have to reverse engineer the interface :/ The funny part about this is that several of NI's competitors sell USB DAQ and oscilloscope systems with drivers you can talk to over VISA. As long as you don't pick one that needs a specific DLL, you should be able to get it to work on a cRIO.
  4. For the curious: it looks like external window tools are limited to 16 windows. I'll probably throw anyone down the stairs who suggests we do more than 16 on a given machine, but currently we are displaying way more than that on one machine. Raphael, unless I missed something major, only seems to allow drawing individual points or pulling an image from a URL, rather than writing a full bitmap to the screen. ImageJ I haven't gotten to yet. I found that this C# wrapper for OpenCV has a nice image display, http://www.emgu.com/wiki/index.php/ImageBox, and since it's bundled with OpenCV it might be the best fit of all. I've been playing with it and it seems promising.
  5. Thanks, I'll take a look. I'm actually already shipping the image around as a 2D array over the network, so it should be easy to try. Nope, but I'll take a look. Now that you mention them, I thought I had read somewhere that they were sort of legacy, so I didn't really learn more about them, but maybe they have some advantages. Raphael looks like a possibility, and we actually use ImageJ so that would be ideal, but I got the impression it was more like a standalone editing program (when I say "we use" I mean "they use, and I tried it once and gave up"). I just looked at the documentation, though, and they seem to allow for some scripting in Python or JavaScript, so I may need to reevaluate. Thanks guys.
  6. I've had various issues with the IMAQ image display in LabVIEW, a key one being that it seems to (and, according to applications engineers, does) run completely in the UI thread with CPU rendering... so if I want to, say, display a whole bunch of camera images on one computer, I find that I rail a CPU core and the entire UI slows down. I've resolved this in a few ways, but... I'm curious if anyone has ever found a third-party (and presumably non-LabVIEW) image display that is as nice as the LabVIEW one (or nicer!). Doing some searching, it looks like everyone can display bitmaps and the like of course, and there are some examples out there for how to implement (as an example) zoom in a .NET image display, but what I'm specifically looking for is something that includes the nice zoom, the pixel indicator, the drawn ROI selection types, etc. Anyway, I've been unsuccessful, so I'm wondering if anyone else has seen anything like this, or at the very least if anyone else has tried and also come up empty.
  7. I'm not totally clear on the concept, but I wouldn't expect contiguous memory to be a problem on desktop (and Linux-RT) targets since they have virtual addressing. I know this was a big benefit of switching from VxWorks to Linux-RT: there was no longer any concept of a largest contiguous block. Are you sure there isn't a (very) temporary spike in memory usage and you're actually running out? I don't know if this works for IMAQ, but under normal circumstances I'd suggest using the Desktop Execution Trace Toolkit to monitor your program and see what is happening when you run out.
  8. Binning can be done at the camera, in which case you should be able to see it somewhere in MAX. But if you do it there, the image is binned when captured, so all processing would operate on the binned version. There are, I believe, noise advantages to binning on the camera. To do this just for display, you'd use IMAQ Cast (convert to U8 or I8; be sure to bit shift by 4 or 8 depending on the source bit depth) and then bin using IMAQ Resample with zero-order sampling and x1 = x0/2, y1 = y0/2. (A rough numpy sketch of this cast-and-bin recipe is at the end of this post list.)
  9. I believe LabVIEW always uses the CPU for rendering everything, but if you're talking about Windows-level compositing and all that, then yes, I could see a really junk graphics card (like on a server-class machine) pushing that work onto the CPU as well. Any recent integrated GPU should have enough power to do basic desktop stuff, though, or so I would expect. I had a similar issue with a large number of quickly-updating images and never really came up with a solid solution. Binning (cuts resolution in half for display) and casting (cuts data from 16 bits/pixel to 8) the image helps, but as mentioned, that increases processing time. Is this a real-time target?
  10. 1/2. Just don't support that. I suppose someone could make a huge array, but if we're (I would assume) talking to something as powerful as or significantly less powerful than a cRIO, I wouldn't expect an array size greater than 1-2k elements, which is a grand total of 8-16 kB. It's a decent amount but not disturbing... plus this module is going to have to run async anyway, since UA is a request-response protocol. Just pull the whole array and split it apart inside the module code (a minimal sketch of that appears at the end of this post list). If you look at the modbus module, it does something similar (at least I think we decided to group writes in that one; I might be thinking of a different project). 3. I saw this a few weeks ago and thought "what an absurd thing, who would ever need this?"... it sounds like it might be a handy tool for you to test with: http://node-opcua.github.io/ And of course, since it's just a protocol, you could also use the C# lib I pointed out in the other thread: https://github.com/OPCFoundation/UA-.NET/tree/master/Dashboard
  11. You should definitely use a separate loop if continuous execution of the loop is important to you. You should probably also be measuring the loop rate if it's really that important, to verify that your PLC communication code can execute in time.
  12. Oh, that's dumb. It looks like that's one of the few things that's different from the Formula Node: http://zone.ni.com/reference/en-XX/help/371361J-01/gmath/dif_pars_math_vis_formnode/ If you only need individual expressions and don't mind paying a little bit: http://sine.ni.com/nips/cds/view/p/lang/en/nid/21313
  13. I've never used it, but doesn't this do that? http://zone.ni.com/reference/en-XX/help/371361J-01/gmath/eval_formula_node/ It says it supports the same syntax as the Formula Node, which seems to include min, max, sine, cosine, etc. Might be worth trying it.
  14. NI OPC == KEPServer. Yeah, you can always build .NET applications on Linux using Mono or .NET Core or whatever we're calling it these days, but I don't think LabVIEW RT has integration yet, so you'd be calling command-line stuff.
  15. I'm still not sure I understand what the communication goal is. You don't want to lose any messages, but you only want to send a message if the receiver has time to process it right then and there? What's the purpose of such a scheme?
  16. Well, I wrote a whole library so I would never have to use modbus I/O servers ever again, so I'm not the person to ask on the second part. As for datasockets themselves, they run in the UI thread and block. So, for example, someone resizing their window or opening the file menu and looking around will cause datasockets to block. You also can't run code which requires either the UI thread or the root loop (I can't remember which), per this idea exchange request: https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Make-it-possible-to-dispatch-VIs-dynamically-when-the-UI-thread/idi-p/1579534 Unfortunately, I can no longer find my KB on the subject; maybe it got lost during some of the web upgrades.
  17. Definitely don't put TCP code in a timed loop, regardless of whether or not you saw a difference here. Just don't. I know there have been actual bugs/crashes in the past, and my current understanding is that the underlying calls that perform the TCP transfer happen at a much lower priority than the thing shoving data into them (your timed loop). If you don't care about delivery, why not use UDP? STM is polymorphic and will happily accept a UDP socket instead of a TCP one.
  18. It's worth pointing out that an OPC server is just a plugin architecture with a bunch of drivers, so I don't think you can reasonably quote performance numbers. First off, each driver works differently, so for example a modbus driver would poll at a configured rate, while the Mitsubishi driver might be able to use events of some kind. KEPServer at least has a threading model such that, for example, several modbus items in the same group will poll in series. I believe the client-side updates work similarly. So the answer is, I think, closest to "some stuff will go fast, some stuff will not". This also answers your wireless network question -- it depends on the driver.
     I believe the shared variable engine always acts as an OPC device, regardless of whether you have DSC installed or not. You'd need KEPServer to link to it, but it's there. (http://digital.ni.com/public.nsf/allkb/CC9CDD577F041786862572120061EB5A)
     Other options I've noticed lately: UA now has a .NET stack if you want to avoid the scary DSC one: http://opcfoundation.github.io/UA-.NET/ The Eclipse Foundation has been slowly funding a SCADA system which is getting slowly better and has decent protocol support now: http://wiki.eclipse.org/EclipseSCADA/Documentation/SupportedProtocols (not a server itself). I've also sort of been watching these guys for a few years; they are more of a full solution (alarming, logging, everything), but it seems like they're doing good work towards modernizing some of this stuff, and they also work with standard OPC client tools: https://inductiveautomation.com/scada-software/scada-modules (not a server itself).
     Datasocket is never a good idea. Just say no.
  19. Fair enough. Again, the field is too broad to answer generally, but I can share two varieties I've worked on, both in the general area of SCADA.
     Project A is more of a local control application, where there was a decent amount of logic on the distributed systems. www.ni.com/dcaf is a good example of the general design used -- all data goes into a central storage, and the control loop operates continuously off of that storage without ever stopping. Events were implemented as tag changes. Data is transferred between device and HMI by copying the full tag table between systems, so there is no concern about missing an event. This could be implemented using the CVT client communication library if you're using the CVT, but in concept it's very simple -- make a TCP or UDP socket and send data on that socket at a fixed periodic rate (see the UDP publisher sketch at the end of this post list).
     Project B was more of a DAQ application with a lot of attached devices and instruments and a tiny, tiny amount of control. As such, an event-oriented approach made sense. Everything used a QMH; data went over the network as separate packets in a TCP stream and was processed as individual updates (although for some HMIs this was copied into a global tag table like the CVT). Here, a TCP stream was used for data updates, while messages were sent using a request-response protocol on top of TCP (think similar to HTTP). The advantage of this is that you always get a response immediately. If you say "move to here" and it knows it can't, you'll find out immediately rather than having to (for example) look for a fault bit to be set.
     For every system I've worked on, I've found syslog to be handy -- this could be either the UDP-based library provided by NI or just making a simple string file logger yourself. Debugging is always a challenge, so a standard way to report human-readable information (debug messages, status updates, errors) is critical. If you make your own, timestamping is important.
     Some people like to have a central error handler, but I've never seen this work well. Reporting, sure; handling, not as much. My general pattern is: (1) check for expected errors; (2) if the error is not expected, reset the process from scratch and report. For example, if you're talking to a serial instrument and you get an error, tell anyone who might care, close the VISA session, reset, and try again. Don't try anything fancy to fix it.
  20. If this is as large as your post makes it sound, this is not a question for an internet forum. It sounds like you need assistance from an alliance partner or NI's sales support group. Your field rep can get you in contact, or you can find alliance partners on ni.com/alliance. As a starting point, I'd suggest skimming this: https://www.ni.com/compactriodevguide/
  21. It has to be in memory to make sure the code can compile. Look back at his post: "By changing the subVI to Load and retain on first call, it will no longer be reserved for running when you run the top-level VI, and as a result, will not ever hang your app if you accidentally leave it open before running." It's in memory, but its state doesn't change from edit mode until the function runs. Personally, I think modals are always evil, so I use floating/normal if possible. It seems like any time someone uses a modal dialog, it's a situation where I wanted to copy/paste from another part of the same application and now I can't because the modal takes over. Whatever you do, for the love of god please don't do what VIPM does, where it endlessly forces itself in front of every other window every 3 seconds just to piss you off. I know the exe doesn't care or have emotions, but every time I install a package I ctrl+alt+delete the damn thing just to spite it.
  22. IIS is really annoying to set up as a reverse proxy, I found, so if you're using it to access devices on a closed network you might have issues. Apache and Nginx are significantly simpler in that respect. I'd also suggest looking at Caddy (https://caddyserver.com/), which is a Go-based web server that is one file and one binary and that's it, with a pretty respectable set of built-in plugins... nowhere near Apache, but good for something that came out two years ago. Caddy is not so good as a reverse proxy, as (so far as I can tell) it doesn't have a rewriter plugin, meaning there is no way to fix HTML or JS results which contain full URLs (which pushes that burden onto the back-end you're proxying to).
  23. If your code is using a large amount of memory (~2 GB or greater), then yes, you might benefit from 64-bit. I'm guessing at the processor level there are also some differences in performance -- for example, I believe x86_64 has more processor registers, making some calculations quicker, but on the other hand instructions are larger, which uses more memory just for the code itself. Long story short, the only way to really answer that is "try it", and the likely answer is not more than a few percent faster or slower. The better starting point would be to use the VI profiler (http://digital.ni.com/public.nsf/allkb/9515BF080191A32086256D670069AB68) and DETT (if you already own it, http://sine.ni.com/nips/cds/view/p/lang/en/nid/209044) to evaluate your performance. The profiler is kind of weird to use and understand, but it's handy. DETT gives you a trace which can help you detect things like reference leaks (you forgot to destroy your IMAQ image ref, for example) or large memory allocations (building a huge array inside a loop). And of course there is no substitute for breaking your code down into pieces and testing the parts separately.
  24. Well, he's probably made a generic TCP function which takes his local cluster and transmits it, like an RPC-style thing. If you're saying the German character is in the variant, I'd suggest adjusting the code on the sending side to use this: http://digital.ni.com/public.nsf/allkb/45E2D7BE36CE3E8B86257CCF0074D89B Keep in mind the type is a variant, so you have to be careful about how the receiver gets it. Note the code in the KB -- if you use this function to flatten an entire cluster, then the receiver doesn't get a string representing the flattened cluster, it gets a string representing a flattened variant, which represents the cluster. So if you were to use this, I'd suggest using it *just* on the variant part of the data. The complexity of doing so depends on what else is in the cluster and how LabVIEW stores data in memory (http://zone.ni.com/reference/en-XX/help/371361N-01/lvconcepts/how_labview_stores_data_in_memory).
     If the German character is in the string, well, the flattened representation of a string has probably never changed -- it's 4 bytes for the length followed by the data (see the flatten/charset sketch below). So in this case I'd guess it's more likely to be an issue with how the ue or oe are represented in whatever character set is in use... but this is about the extent of my understanding of character sets in LabVIEW. I'd suggest doing what inf{} said above, maybe trying to reproduce a very simple case, or potentially using the Unicode tools (https://decibel.ni.com/content/docs/DOC-10153) to convert characters from Windows to UTF-8 and back again.
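
A few rough sketches referenced in the posts above. These are hedged, non-LabVIEW illustrations in Python; names, addresses, ports, and helper functions are invented for the examples and are not code from the original projects.

For post 2: a minimal sketch of the M-JPEG-over-HTTP idea, where a server streams JPEG frames in a multipart/x-mixed-replace response that most browsers render as a live image. The frame source (make_jpeg_frame, the latest_frame.jpg file) and the port are placeholders.

```python
# Hypothetical sketch of M-JPEG over HTTP (multipart/x-mixed-replace).
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

BOUNDARY = b"frameboundary"  # arbitrary multipart boundary name


def make_jpeg_frame() -> bytes:
    """Placeholder: return one JPEG-encoded image as bytes."""
    with open("latest_frame.jpg", "rb") as f:  # assumed to be written by the acquisition loop
        return f.read()


class MJPEGHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=" + BOUNDARY.decode())
        self.end_headers()
        try:
            while True:
                frame = make_jpeg_frame()
                # each part: boundary line, headers, blank line, JPEG bytes
                self.wfile.write(b"--" + BOUNDARY + b"\r\n")
                self.wfile.write(b"Content-Type: image/jpeg\r\n")
                self.wfile.write(b"Content-Length: " + str(len(frame)).encode() + b"\r\n\r\n")
                self.wfile.write(frame + b"\r\n")
                time.sleep(1 / 30)  # ~30 fps cap; tune to the camera rate
        except (BrokenPipeError, ConnectionResetError):
            pass  # client closed the stream


if __name__ == "__main__":
    HTTPServer(("", 8080), MJPEGHandler).serve_forever()
```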
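For post 8: a rough numpy equivalent of the cast-then-bin recipe, purely to show the arithmetic (bit shift 16-bit pixels down to 8 bits, then zero-order 2x binning by keeping every other pixel). This illustrates the idea, not the IMAQ implementation; the image sizes and bit depth are made up.

```python
import numpy as np


def cast_and_bin(img16: np.ndarray, source_bit_depth: int = 12) -> np.ndarray:
    """Shift a 16-bit-per-pixel image down to 8 bits, then halve the resolution.

    source_bit_depth is the real bit depth of the camera data (e.g. 12 or 16);
    the shift keeps the most significant 8 bits.
    """
    shift = source_bit_depth - 8            # 4 for 12-bit data, 8 for 16-bit data
    img8 = (img16 >> shift).astype(np.uint8)
    # zero-order "binning" for display: keep every other pixel in x and y
    return img8[::2, ::2]


# Example with a fake 12-bit image stored in a uint16 array
frame = np.random.randint(0, 2**12, size=(480, 640), dtype=np.uint16)
small = cast_and_bin(frame, source_bit_depth=12)   # -> 240 x 320, uint8
```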
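For post 10: a minimal sketch of "pull the whole array once, then split it apart inside the module code," assuming the python-opcua (opcua) client package, a hypothetical server address, and a made-up node ID and channel layout.

```python
# Sketch only: read one large array tag and split it locally.
from opcua import Client

client = Client("opc.tcp://192.168.1.10:4840")    # hypothetical cRIO-class target
client.connect()
try:
    node = client.get_node("ns=2;s=BigArray")     # made-up node identifier
    values = node.get_value()                     # one request for the whole array

    # Split the flat array into named channels inside the module code.
    # The interleaved layout here is an application choice, not an OPC UA rule.
    channels = {
        "temperature": values[0::3],
        "pressure":    values[1::3],
        "flow":        values[2::3],
    }
finally:
    client.disconnect()
```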
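For post 19 (Project A): the "make a TCP or UDP socket and send the full tag table at a fixed periodic rate" pattern is simple enough to sketch. The tag names, the JSON wire format, the HMI address, and the 10 Hz rate are all illustrative choices.

```python
# Sketch of a fixed-rate "copy the whole tag table" publisher over UDP.
import json
import socket
import time

HMI_ADDR = ("192.168.1.50", 6000)    # hypothetical HMI endpoint
PERIOD_S = 0.1                       # 10 Hz publish rate

tag_table = {"pump_on": False, "setpoint": 42.0, "fault_code": 0}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    # ... the control loop elsewhere updates tag_table ...
    sock.sendto(json.dumps(tag_table).encode("utf-8"), HMI_ADDR)
    time.sleep(PERIOD_S)   # fixed periodic rate; no acknowledgement expected over UDP
```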
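For post 24: a small sketch of the flattened-string layout described there (a big-endian 4-byte length followed by the raw bytes) and of how the same German character has different byte values in the Windows code page versus UTF-8, which is one way a string can look "corrupted" on the receiving side. The example text is invented.

```python
import struct


def flatten_string(data: bytes) -> bytes:
    """LabVIEW-style flattened string: 4-byte big-endian length, then the data."""
    return struct.pack(">i", len(data)) + data


def unflatten_string(buf: bytes) -> bytes:
    (length,) = struct.unpack(">i", buf[:4])
    return buf[4:4 + length]


# The same "Ü" is a single byte in Windows-1252 but two bytes in UTF-8.
text = "Überdruck"
as_cp1252 = text.encode("cp1252")   # b'\xdcberdruck'
as_utf8 = text.encode("utf-8")      # b'\xc3\x9cberdruck'

flat = flatten_string(as_cp1252)
assert unflatten_string(flat).decode("cp1252") == text
```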