Everything posted by ensegre

  1. Crosspost on the dark side (and along the lines of "where do I connect the cable so that it is integrated with LabVIEW"). Forget it.
  2. Certainly not. You see, the difference between these anemometers and yours is not just the length of the cable and the firmware version. The general principle of the instrument may be similar, but the protocols chosen to deliver the data will certainly differ. You have to look up your documentation and understand which commands and data formats your instrument speaks, and you have to get familiar enough with LabVIEW to write a program that does what you want. Based on my previous experience I would guess that your instrument accepts some commands sent on the serial port and delivers, continuously or on demand, velocity datapoints on the same serial line. Your program will have to receive this data, parse it and interpret it. The format might be ASCII or binary, that I don't know. The basic building blocks of your program will likely include VISA Write, VISA Read and Scan From String; beyond that, the business logic is all yours. A further issue you might have is how to synchronize the anemometer with the readout of the other instruments. As I don't know the details, I can't recommend anything specific. Basically, every device runs on its own clock and replies to the computer with its own delays and latency, unless there is a way to ensure sync - a trigger, for instance.
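     A minimal textual sketch of that VISA Write / VISA Read / Scan From String sequence (PyVISA standing in for the LabVIEW palette VIs). The resource name, baud rate, command string and reply format below are pure assumptions; your instrument's manual defines the real ones:

        import pyvisa

        rm = pyvisa.ResourceManager()
        inst = rm.open_resource("ASRL3::INSTR")   # serial port, e.g. COM3 (assumed)
        inst.baud_rate = 9600                     # per the instrument's manual
        inst.read_termination = "\r\n"
        inst.write_termination = "\r\n"

        inst.write("?Q")                          # hypothetical "send one sample" command
        reply = inst.read()                       # e.g. "+001.25,-000.31,+000.07"

        # "Scan From String" equivalent: parse the ASCII fields into numbers.
        u, v, w = (float(x) for x in reply.split(",")[:3])
        print(u, v, w)
        inst.close()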
  3. I have no experience with this one, but I do with Gill sonic anemometers, and I don't think the story is very different. These are probes which provide a continuous stream of 3D velocity data, usually on a serial port, using some simple communication protocol of their own, but documented. Normally all that is required is setting the work range and sampling frequency, and then just logging the incoming data for archival/online processing/whatever (which is application dependent). https://www.nortekgroup.com/products/vectrino?p=en/products/velocimeters/vectrino (click on Technical Specification/Data Communication) hints that it is a) serial and b) supported by some software or an SDK. Beyond this, homework. You know, handling serial communication, sending command words, parsing incoming strings, logging data.
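     Under the same caveats, a sketch of the "just log the incoming data" part, assuming the probe has already been configured to stream ASCII lines; the port name, baud rate and stop condition are placeholders, not Vectrino specifics:

        import serial  # pyserial

        with serial.Serial("COM4", 19200, timeout=1) as port, \
             open("velocities.log", "a") as log:
            for _ in range(1000):                 # or loop until a stop flag is set
                line = port.readline().decode(errors="replace").strip()
                if line:                          # skip empty reads on timeout
                    log.write(line + "\n")        # archive now, parse later or online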
  4. PS: you may have to cast the number arrays to U32 to avoid wraparound of the sum if you have more than 32767 pixels; sorry, I didn't pay attention to that.
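     To illustrate the wraparound (NumPy standing in for LabVIEW's Add Array Elements, which keeps the input type; sizes and values are made up):

        import numpy as np

        img = np.full((200, 200), 1000, dtype=np.uint16)   # simulated 16-bit image

        wrapped = np.sum(img, dtype=np.uint16)   # 16-bit accumulator: 40,000,000 mod 65536
        correct = np.sum(img, dtype=np.uint32)   # cast/accumulate in U32: 40,000,000
        print(wrapped, correct)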
  5. In general I am however interested in hearing about experiences and opinions on LV vs. X for building SCADAs. Quite a broad question, I understand, and much dependent on plant size and boundary requirements, but still. In my case some of the arguments which drove me once more to LV were: heterogeneity of the hardware to be controlled, the need to interface with other systems, a perception of greater freedom and power in concocting the HMI, an academic rather than industrial work culture, and the presumption of insurmountable learning curves and additional license costs elsewhere.
  6. I'm throwing in my thoroughly incomplete point of view, because my current project is a SCADA too, and I decided to go the LabVIEW way. However, I'm still asking myself about the rationale of my decision, and about the option of integrating PLCs at a later stage. At the design phase I looked at some docs from a few PLC vendors and their control software, and was scared off by their license terms (for LV we have an academic license), by the learning curve and by the apparent (to me) clunkiness. Add to that my presumption that the bonanza I can get from the LabVIEW community largely surpasses the support, even paid, which I could get from a solid SCADA vendor. It seems you are still deciding whether you need PLCs or not. If you're excluding NI (real-time standalone?) hardware, does it mean that you can afford the increased instability risk of running your business logic on a desktop computer? Is safety among your requirements? (No connection with the particular vendor.) I ended up purchasing a number of these Advantech ADAM-5000 modules, which seem a decent compromise for my needs. On one side, they are ruggedized and modular remote I/O units, which I can easily read as modbus registers. On the other, they seem to have a minimal capacity for embedded logic (e.g. you can define logic functions of three inputs which are routed to another register of the same or of another module), thus potentially overlapping with minimal PLC capabilities. I'm currently testing them. Has anyone else used them and would share opinions?
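     For the curious, reading them "as modbus registers" boils down to something like the following raw Modbus/TCP request. The IP address, unit ID, register address and count are placeholders; in practice a Modbus library (or the LabVIEW Modbus API) does this for you:

        import socket
        import struct

        HOST, PORT = "192.168.1.10", 502          # module IP (assumed), standard Modbus port
        UNIT, START, COUNT = 1, 0, 8              # unit id, first register, number of registers

        with socket.create_connection((HOST, PORT), timeout=2) as s:
            # MBAP header (transaction id, protocol id 0, length, unit id)
            # followed by the PDU: function 0x03, starting address, register count.
            s.sendall(struct.pack(">HHHBBHH", 1, 0, 6, UNIT, 3, START, COUNT))
            reply = s.recv(256)

        # Reply: 7-byte MBAP + function code + byte count, then the 16-bit registers.
        registers = struct.unpack(">" + "H" * COUNT, reply[9:9 + 2 * COUNT])
        print(registers)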
  7. Getting no answer, I crossposted to the dark side and got a solution there.
  8. No idea about the above. If it marginally helps, I have this snippet which descends a directory tree and processes all .vi, .ctl, .lvclass and .lvlib files. The code is a basic elaboration of what was proposed earlier in the thread. SeparateCompiledCodeTree.vi
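     For reference, a text-language equivalent of what the snippet does in G, walking the tree and collecting the four file types (the root path is of course a placeholder):

        import os

        EXTENSIONS = (".vi", ".ctl", ".lvclass", ".lvlib")

        def collect_lv_files(root):
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    if name.lower().endswith(EXTENSIONS):
                        yield os.path.join(dirpath, name)

        for path in collect_lv_files(r"C:\MyProject"):
            print(path)   # the VI then opens each file and sets the separate-compiled-code flag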
  9. I wonder if someone ran into this and has a good suggestion. I have a DSC project, in which it looks just right to organize my shared variables hierarchically, e.g. as libraries nested within libraries. Now this is easy to do programmatically, see the attached project. The problem arises when building an application. The plain way to do it would seem to be to create a build specification which includes the additional libraries, and select their deployment in the "Shared Variable Deployment" tab. However, this doesn't seem to work for nested libraries as in my example. The only possibility seems to be adding each of the contained libraries to "Always included", but by doing so the hierarchy is flattened. If I include them like this, the variables within the container library are not deployed at runtime. In my example project:
     - open the project in the IDE, run DeployAllSharedVariables.vi, then CheckDeployed, and see the result (all four variables found with their nested paths);
     - build and run DeployFlatLibraries and see the result: four variables, but flattened paths;
     - build and run DeployHierarchicalLibraries: only the two variables in the unnested SimpleVariableLibrary are there.
     I've searched a bit, and only came up with this document, which (for >2009) says "just check the checkbox". Nor does the help page say much. I wonder if I can do what I'd like only by compiling the libraries separately and loading them programmatically afterwards, both in the IDE and in the exe. Which is probably sane, but inconvenient for the first attempts. TestDeploy.zip
  10. I'm trying to use LVTM to debug an issue I have with, probably, stale references to asynchronously, dynamically launched VIs. To stress-test the problem, I open LVTM, open my project, launch what I have to launch, and then I uncleanly choose File/Close All (this project) while the whole contraption is running. LVTM then shows me the attached unresponsive, almost empty tree, with an entry surviving from the aborted project. If I double-click it, I get the error dialog. How can I help debug LVTM (and my issue too)?
  11. If I put such a threshold I see no memory events at all. Is your trace dependent on some particular data already present in your SV? Otherwise, it may be that the issue is solved in LV2017. Edit: OK, with more data in the string I see this (a different sequence of memory operations, in fact). But does this indicate a leak? I note the free of 4 bytes, which maybe you squelched with the threshold.
  12. I don't know if I'm looking at the same thing as you, and I haven't investigated either, but I don't see leaks. LV2017 32-bit, Windows 10 (where else do you have SV?). I suppose allocation and deallocation sizes might depend on the variable content too, don't they? My trace has many more events; are you somehow filtering them? I don't fully understand your throttling of the second loop based on the timeout of an occurrence, but there you know better, maybe it has to do with your architecture at large. Untitled Project 1.lvproj.det
  13. Thx. And I agree about the very limited value of piecewise parsing just for checking. Btw, I see you don't keep GitHub, the forum and the CR entry page in sync. Never mind... am I the only freak using this toolbox?
  14. This is what I'd propose (cluster output). Backsaved for LV2015. mupGetLastError.vi Construct.vi Get_Last_Error.vi Now it occurs to me that only the first parsing error in a (multi) expression is reported; I could think of cases where I would like to see all errors in long expressions at once (when not ambiguous), but I think this is not contemplated in muparser. For example, I don't think you can merge different expressions parsed separately into a single muparser instance.
  15. Wow, that was fast. I'll evaluate tomorrow how it fits in my current project. I agree that syntax errors can only arise where expressions are defined, and hence there may be little reason for an a posteriori VI. Though muparser allows decoupling it, which may be an argument in its favor, there is no need to overdo it. Something I could suggest is to group all the outputs of GetLastError in a cluster for compactness. I was also considering including these as attributes of the muExpr class (ref), but I realized that your choice was to generate an empty class in case of an error during construct, so that wouldn't apply.
  16. Still about the error reporting: no big deal, they are all very simple. The full message from mupGetErrorMsg() is already more informative; beyond that I can't imagine much. See my go at it. While designing an interface which should highlight syntax errors in typed formulas, I realize I'm missing a VI that directly returns the strings of mupGetExpr() and mupGetErrorToken() together with the number mupGetErrorPos() [when applicable], instead of tediously parsing them out of the full error message. Do you envision adding it to the toolbox? I may have a go at getting them from the encapsulated form later.
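     For what it's worth, those DLL calls can be exercised directly, e.g. from Python via ctypes; this is only a sketch (the library name, the mupCreate argument and the exact error behaviour may differ between muparser versions), but it shows the pieces such a VI would bundle:

        import ctypes

        mup = ctypes.CDLL("muparser.dll")         # platform/version-dependent name
        mup.mupCreate.restype = ctypes.c_void_p
        mup.mupEval.restype = ctypes.c_double
        for f in ("mupGetErrorMsg", "mupGetErrorToken", "mupGetExpr"):
            getattr(mup, f).restype = ctypes.c_char_p

        h = ctypes.c_void_p(mup.mupCreate(0))     # 0 = float base type (2.x signature)
        mup.mupSetExpr(h, b"3*sin(x")             # deliberately broken expression
        mup.mupEval(h)                            # evaluation raises the error flags

        if mup.mupError(h):
            print("expr :", mup.mupGetExpr(h).decode())
            print("msg  :", mup.mupGetErrorMsg(h).decode())
            print("token:", mup.mupGetErrorToken(h).decode())
            print("pos  :", mup.mupGetErrorPos(h))
        mup.mupRelease(h)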
  17. More out of curiosity than of hope: does anybody have any idea why SVs are almost unsupported on Linux? By almost I mean that controls and indicators cannot be bound to shared variables, and that shared variables cannot be programmatically created and looked up. I know that SVs hosted on Windows can be accessed in Linux LV using datasocket nodes, but that is as far as it goes. And it has been said that datasocket is despicable. What are the missing pieces that make SVs Windows-only? I didn't find much in the canonical places, so I posted a dumb, zero-kudos-attracting idea.
  18. Dataflow. The small loop runs only after the big acquisition loop ends. You probably have no choice but to communicate the AVI file reference and the writing status from one to the other via local variables. Or a channel wire, perhaps.
  19. Basic LV programming question. One way is with event frames. You should only open the AVI file when the boolean changes from false to true and close it when it reverts to false.
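     A minimal, runnable analogue of that false→true / true→false logic (a plain text file stands in for the IMAQ AVI reference; in LabVIEW the previous value would live in a shift register or come from a value-change event):

        states = [False, False, True, True, True, False, False]   # sampled button values
        prev, fh = False, None

        for rec in states:
            if rec and not prev:                  # false -> true: open a new file
                fh = open("capture.log", "a")     # stand-in for IMAQ AVI2 Create
            elif prev and not rec and fh:         # true -> false: close and release it
                fh.close()
                fh = None
            if fh:
                fh.write("frame\n")               # stand-in for writing one frame
            prev = rec

        if fh:
            fh.close()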
  20. http://zone.ni.com/reference/en-XX/help/370281AD-01/imaqvision/imaq_avi2_create/ http://zone.ni.com/reference/en-XX/help/370281AD-01/imaqvision/imaq_avi2_get_codec_names/
  21. I don't see a producer-consumer in your VI. That is, e.g., two loops, one enqueueing and one dequeueing image references, or something equivalent. You create an AVI file at start because that's what your program does; you might want to do it only after the button is pressed. And I would handle that with an event structure and a shift register propagating the file reference from one iteration to the next. Maybe you'd want to check some LV learning resource first?
  22. A producer-consumer architecture, with a queue of images to be saved by the consumer loop, is the first thing you should really try. Alternately and asynchronously grabbing one image at a time in each loop for either viewing or saving, like you do, won't get you far. Also, rendering the preview image might be resource-intensive and compete with saving; you may want to display only one in every N images. Finally, AVI can involve compression codecs, which can also be computationally demanding; to maintain a given frame rate you might have to stream uncompressed images, choose a less demanding codec, or reduce the image size.
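     In text-language terms (assumed names, with sleeps standing in for the camera and the codec), the pattern is simply:

        import queue
        import threading
        import time

        frame_queue = queue.Queue(maxsize=100)    # bounded, so a stalled disk shows up early

        def producer(n_frames=50):
            for i in range(n_frames):
                frame = f"frame {i}"              # stand-in for an IMAQ image reference
                frame_queue.put(frame)            # enqueue and immediately grab the next one
                time.sleep(0.01)                  # camera frame period
            frame_queue.put(None)                 # sentinel: acquisition finished

        def consumer():
            while True:
                frame = frame_queue.get()         # dequeue; blocks until data arrives
                if frame is None:
                    break
                time.sleep(0.03)                  # stand-in for AVI write / compression

        t1, t2 = threading.Thread(target=producer), threading.Thread(target=consumer)
        t1.start(); t2.start()
        t1.join(); t2.join()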