Everything posted by smithd

  1. Please please please start using source control other than a monthly backup. Go to https://www.sourcetreeapp.com/, install it, and create a new repo using the "add a working copy" option. Everything is embedded and no separate server is necessary; you're just committing changes locally. It's 5-10 minutes to set up, and then all you have to remember to do is commit occasionally -- I do it roughly every hour, some people prefer to commit when they finish a feature, but in any case it's easy and it would prevent this problem in the future. OK, now to try to fix the issue. My first thought is to enable debugging in the exe and then connect the LabVIEW debugger to it. This may not work since it's broken from the start, but it's worth a try: http://zone.ni.com/reference/en-XX/help/371361N-01/lvhowto/debug_apps_dlls/ http://digital.ni.com/public.nsf/allkb/8DA679805915DE40862572D5007B2F70 If that fails, on the "additional exclusions" tab of the build spec there is a series of checkboxes. These would be better labelled "toggle randomly to fix build" than their existing names. As suggested by that label, toggle them randomly and see if the build starts working. On the advanced tab there is an option to use the 8.x file layout. Same basic idea.
  2. Not specifically, but it doesn't surprise me. Classes are buggy. Flattening variants is buggy. Flattening variants inside of a class... probably buggy. I don't have a good answer for how to fix it either, except to say that if I know a class needs to be saved I'll always write a to/from-string (or to/from-config-cluster) method and save the data more manually. I use dr's JSON lib these days.
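     To make that pattern concrete, here's a rough text-language sketch of the manual to/from-string idea (Python standing in for G, since I can't paste a diagram here; the class and field names are made up for illustration):

         import json

         class Settings:
             """Example class whose state is serialized explicitly rather
             than trusting a generic flatten/unflatten of the whole object."""
             def __init__(self, gain=1.0, offset=0.0, channel="Dev1/ai0"):
                 self.gain = gain
                 self.offset = offset
                 self.channel = channel

             def to_string(self):
                 # Explicit field list: the on-disk form is versionable and inspectable
                 return json.dumps({"gain": self.gain, "offset": self.offset,
                                    "channel": self.channel})

             @classmethod
             def from_string(cls, s):
                 d = json.loads(s)
                 return cls(d["gain"], d["offset"], d["channel"])

     The point is that the class controls its own disk format, so a buggy or version-sensitive flatten of the raw object never enters the picture.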
  3. If you have the Vision Development Module, there is a function that returns the built-in palettes: C:\Program Files (x86)\National Instruments\LabVIEW 2015\vi.lib\vision\Display.llb\IMAQ GetPalette. Useful if someone wants to replicate the behavior of the IMAQ image display.
  4. Well, you'd basically have to write a program in .NET which takes, for example, TCP messages and puts them on an AMQP queue. It would be doable, but annoying. A similar but slightly preferable mechanism would be to write a Windows host program that your cRIOs talk to using plain TCP. This Windows host program could do the interfacing to AMQP and potentially run on the same machine as the AMQP broker. A different route entirely would be to use the RabbitMQ adapter for the MQTT protocol. I've never used the adapter, but the protocol is very simple and there are plenty of native LabVIEW implementations available (real ones, with all the code intact, like https://github.com/DAQIO/LVMQTT). I would say your best bet would be to learn how to call DLLs in LabVIEW and directly call one of the C APIs for Rabbit. Eventually, as I understand it, this toolset will use AMQP behind the scenes: http://www.ni.com/documentation/en/systemlink/ear/manual/skyline-data-services/ But at present it's an early access release.
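     To make the gateway idea concrete, here's a rough sketch of such a bridge in Python using the pika AMQP client (the post above suggests .NET, but the shape is the same; the host name, port, queue name, and newline framing are all placeholder assumptions):

         import socket
         import pika

         # Connect to the AMQP broker (e.g. RabbitMQ) -- address is a placeholder
         conn = pika.BlockingConnection(pika.ConnectionParameters("amqp-host"))
         channel = conn.channel()
         channel.queue_declare(queue="crio.data")

         # Accept a plain-TCP connection from the cRIO and forward each
         # newline-delimited message onto the AMQP queue
         srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
         srv.bind(("", 6000))
         srv.listen(1)
         client, _ = srv.accept()
         buf = b""
         while True:
             data = client.recv(4096)
             if not data:
                 break
             buf += data
             while b"\n" in buf:
                 msg, buf = buf.split(b"\n", 1)
                 channel.basic_publish(exchange="", routing_key="crio.data", body=msg)

     The cRIO side then only needs LabVIEW's plain TCP functions, and all the AMQP dependencies live on the host machine.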
  5. Wow, that is really nice looking and exactly what I'm looking for. Doing it all custom sounded like a big pain, but you've made it look a lot easier than I thought. The other advantage this provides: I had a request to be able to visualize the dTs for selected channels. Originally I had a separate tab where users would have to select two channels, but if I'm drawing it manually it probably wouldn't be too hard to overlay a dT. Thanks a lot! I don't totally comprehend -- I have all the data I need and I can easily generate data for a digital waveform or XY chart; the challenge is plotting it in a nice way. Thoric's sample looks just like what I wanted to accomplish.
  6. Hey all, I'm looking to make a user interface for digital triggers, and I want to display a chart where each trigger is displayed against the others on the same time scale. The problem is that I have a potentially very long time scale, so to get a reasonable resolution (say a 10 ns trigger displayed against a 100 ms period) I need a lot of samples per channel. I may have N channels, and that difference in period may vary wildly. However, each channel will likely have only something on the order of 2-100 transitions, so I'd like to store the information in this sparse form (even my initial 'sparse' form ran out of memory the first time, since I included the main oscillator :/). My initial solution was to just use the digital graph (because that's what I want it to look like anyway), but that appears to require a point for every value, even those which don't change. An XY graph plots the signals on top of each other, and that's tough to visualize. I could offset each signal by 1.5*ch#, but then I end up with a super scrunched graph (I'd like to be able to scroll up and down). I looked at the mixed signal plot because I have XY data and I have groups of triggers which belong to the same subsystem... but there appears to be no way to dynamically add or remove groups. So, does anyone have any thoughts on how to accomplish this?
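     For reference, the sparse form I have in mind is just an initial level plus a list of transition times per channel. A rough sketch of expanding that into offset XY step traces (Python standing in for G; the data and names are made up):

         # Each channel: (initial_level, [transition times in seconds])
         channels = {
             "trig_a": (0, [0.010, 0.010000010]),      # a 10 ns pulse
             "trig_b": (0, [0.000, 0.050, 0.100]),
         }

         def to_xy(initial, times, t_end, offset):
             """Expand sparse transitions into XY step points, shifted
             vertically by `offset` so the channels stack on one graph."""
             xs, ys = [0.0], [initial + offset]
             level = initial
             for t in times:
                 xs.append(t); ys.append(level + offset)   # hold the old level
                 level ^= 1
                 xs.append(t); ys.append(level + offset)   # step to the new level
             xs.append(t_end); ys.append(level + offset)
             return xs, ys

         for i, (name, (init, times)) in enumerate(channels.items()):
             xs, ys = to_xy(init, times, t_end=0.1, offset=1.5 * i)
             # plot xs vs ys as the trace for `name`; channels sit 1.5 units apart

     Memory stays proportional to the number of transitions rather than to the period/resolution ratio, which is the whole point.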
  7. Correction: you can't use .NET *from LabVIEW* on a cRIO. You can always install Mono, maybe even .NET Standard or Core or whatever they call it. Admittedly this doesn't help marcos, but... Someone made a native AMQP client, which is here: https://github.com/tweeto/AMQP-Client Never used it myself, but they say they tested against Rabbit. Edit: it looks like the user did not actually include their code... just the lvclass files :/ I think there used to be a native AMQP implementation on the community but I can't find it since the switch-over; I don't know if the above is the same one. To be clear, ZeroMQ has nothing to do with AMQP except that the authors of ZeroMQ hated AMQP enough to write ZeroMQ. I used to have a long article on the ni.com community about how to cross-compile ZeroMQ for Linux, but I can't find it since they moved to the new community format (even though it's my own document). In any case, you would need to cross-compile, and then the ZeroMQ library should in theory work.
  8. If you grab the latest code from before NI Week, it's been pretty stable for me in LV15, although the 2017 VIs seem to have prettier icons in some cases. I didn't dive too deep, but it looked like the VIMs just wrapped the variant functionality, so it probably wouldn't be too much of a challenge to remove the VIMs and just use the layer one level down -- unless I missed something important. We'll probably eventually bump up to 2017, but it's such a chore to update all the devices, and JSON is at the lowest dependency layer.
  9. My VIPM version is 2017.0.0.2007 (Feb 03 2017).
  10. I tried to install this new version into 2017 and the library seems to have been improperly disconnected. Lots of those "library does not claim to own this VI" messages. I tried reinstalling, same result. Edit: tried converting the package to a zip and just extracting it into vi.lib; worked perfectly. Very weird.
  11. Not totally sure what you're asking, but it sounds like it might be a good idea to make sure the format of an INI file is clear. It's:

         [section]
         key1=value1
         key2=value2

     In your case the key names are known, but the section names may not be. I'd suggest using Get Section Names, and then for each section name calling Read Key four times with the keys you listed. Stepping back a bit, you are reinventing the wheel. If you just want to take a LabVIEW type and save/load it from an INI file, use Moore Good Ideas' MGI Read/Write Anything library, which you can download from VI Package Manager. This uses an INI-like format to store clusters on disk. You could do the same with a variety of other formats (flatten/unflatten to JSON, or XML) without having to manually pick out each value you care about. Stepping back even further, depending on your needs you could simply configure all of your DAQmx tasks in MAX and never have to worry about saving them to a file, but this obviously depends on your system.
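     For completeness, that "get sections, then read known keys" pattern looks like this in Python's configparser (the file name and the four key names are placeholders for whatever yours are):

         import configparser

         cp = configparser.ConfigParser()
         cp.read("tasks.ini")   # placeholder file name

         # Section names are unknown ahead of time; the key names are fixed
         for section in cp.sections():
             for key in ("channel", "min", "max", "units"):   # your four known keys
                 value = cp.get(section, key)
                 print(section, key, value)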
  12. Oh, I meant "in theory, but really", not the theoretical max. There's the bus's physical max, then there's what's achievable for most people, then there's what LabVIEW FPGA achieves.
  13. Rust seems to focus on moving references around and passing ownership between chunks of code, which isn't far off from dataflow. The thing Rust can do that LabVIEW can't (through dataflow) is retain a read-only reference to a mutable object elsewhere. I believe the recent changes to the IPE (http://zone.ni.com/reference/en-XX/help/371361P-01/glang/inplace_datareference/ -- concurrent read-only access) allow this, but you're still stuck with the horror of the IPE, and you have to do extra work to make it read-only.
  14. It's possible to overflow the buffer on the FPGA just from FPGA use, but it's hard. The Zynq chips can DMA something like 300 MB/s, so I could imagine overflowing that only if you have images you are processing. The older targets have a PCI bus which I think should in theory support something like 80 MB/s of data, but I vaguely remember getting more like 40. The highest-speed analog module (except the new store-and-forward scope) generates 2 MB/s per channel, so a full chassis would be 8 slots * 4 channels * 2 MB/s = 64 MB/s. So basically, if you have old hardware and abuse it, you can hit the limit.
  15. Well, locally I think the only real option is SQLite. For a central server, unless you have stringent requirements it probably doesn't matter. If you need to write to a database from a VxWorks cRIO, then MySQL/Maria is right, since they have a simple raw-TCP connector. Postgres seems to be the current favorite. For time-series data I don't have an answer; I looked around as well and wrote up what I found here: The one I'm most excited about is https://www.timescale.com/, which is built on top of Postgres, but another interesting option is MariaDB's ColumnStore format. The disclaimer to this is that I have TBs of data to handle, and if you don't, I'd just use Postgres or MySQL/Maria and not worry.
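     For what it's worth, the "local" case really is as simple as it sounds, since SQLite is embedded and the whole database is just a file on disk. A minimal Python sketch (the table and file names are illustrative):

         import sqlite3

         conn = sqlite3.connect("measurements.db")   # the "server" is this file
         conn.execute("CREATE TABLE IF NOT EXISTS samples (t REAL, ch TEXT, value REAL)")
         conn.execute("INSERT INTO samples VALUES (?, ?, ?)", (0.0, "ai0", 1.23))
         conn.commit()
         for row in conn.execute("SELECT * FROM samples"):
             print(row)
         conn.close()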
  16. Interesting, the same holds even if you do the following:
     - pull out the string and read the array size
     - pull out the string and use Split String (at char 0xFF) but only read the offset
     - inside the top (writing) loop, manipulate the string's data in place (convert to byte array, replace element 0, byte array to string, re-bundle)
     To force LabVIEW to do an allocation, I had to go so far as to pull the data out of the DVR, manipulate it, and use the manipulated data (I used the same byte-array replace-index + Split String). They really did a solid job with the optimization here. Yes, it actually did. And in typical LabVIEW fashion there are two copies (one for the data after the for loop, one to bundle it). You can get rid of the unnecessary copy through this embarrassing scheme: As impressive as the DVR optimization is, this is far more common and far more sad.
  17. Given your other post, is it safe to assume this is an XNode? In that case, maybe you can use this https://lavag.org/topic/19781-xnode-owning-diagram/ to get a ref to the VI you're dropping on, then use that VI ref to get the application context, and from there go crazy with property nodes across application contexts?
  18. Well, the nice thing about his approach is that you don't have to copy the entire data structure, just the portions you care about. Note that he unbundles inside the IPE structure. For known names that you want to support (like always unbundling the thing called "waveform") the VIM would work, but it's not arbitrary like the XNode could be.
  19. Interesting, you have users who bother to read your documentation? Most documentation I write is for the purpose of checking a box on someone's checklist. And I think this is the problem: it's hard to write useful documentation for something you're actively working on. The right answer is another programmer, but since everyone hates writing documentation you can't do that. Unless you have a fairly big organization behind you, I imagine it's difficult to get a tech writer, and my experience with tech writers has been that they are not tech enough anyway. So you end up writing something that only makes sense to you. To answer the original question, most projects I work on are either short or never end. It does help to break things down into milestones and deliverables and all that... and the reality is that there are some weeks where I don't do a ton because I just feel blegh about the project, so I work on something else or pick off old minor issues to feel like I'm accomplishing something.
  20. If you haven't seen it, this is a slightly old but still useful resource: http://www.ni.com/compactriodevguide/ For example, it would tell you that there is no reason to copy data from the DMA directly into another FIFO, because the real-time side of the DMA buffer can be as large as you need it to be. As for your transfer mechanism, prefixing your data with its length rather than null-terminating it is the better plan, as C has been teaching us over and over for 40 years. In either case, if you lose any data, all future DMA data is invalidated, so you have to set it up so that the FPGA never loses a packet of data. You can do this by previewing the DMA FIFO size to make sure it's big enough, or by using a -1 timeout. The length prefix will also give you nominally lower latency, since you know exactly how much data you're looking for.
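     The length-prefix framing looks the same in any language; here's a small Python sketch over a byte stream (the helper names are mine, not from any NI API):

         import struct

         def frame(payload: bytes) -> bytes:
             """Prefix the payload with a 4-byte big-endian length."""
             return struct.pack(">I", len(payload)) + payload

         def read_frame(recv_exact) -> bytes:
             """Read one frame, given a function that returns exactly n bytes.
             The header tells the reader exactly how much data to wait for,
             so it never has to scan for a terminator."""
             (n,) = struct.unpack(">I", recv_exact(4))
             return recv_exact(n)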
  21. It looks like most of those references are named identically to the cluster elements in which they are stored. You can pretty easily use the data-type parsing VIs to take each cluster and get all the element names out, then get all front-panel control references and just match the strings. I'm not sure how you'd get the generic control reference back to the specific types in the cluster, but I bet you could typecast, or use To Variant -> Variant To Data, to accomplish it.
  22. Presumably there is an inverse of this function: https://forums.ni.com/t5/LabVIEW/Excel-ActiveX-password-encryption/td-p/3599665 I can't speak to the OpenG library specifically, but a typical pattern for LabVIEW front-panel items is to indicate the default in parentheses (hence the "error in (no error)" you see on most VIs). I would say that in general you should only follow this pattern if the default is unexpected, so a default of no password would not require an indication. For the AES, you could pad the result, although I don't know how that affects the strength: https://en.wikipedia.org/wiki/Padding_(cryptography)
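     For reference, the usual block-cipher padding scheme is PKCS#7; a small Python sketch (not tied to the OpenG implementation, which I can't speak to):

         def pkcs7_pad(data: bytes, block: int = 16) -> bytes:
             """Pad to a multiple of the block size; every added byte holds
             the pad length, so the padding is always unambiguous to remove."""
             n = block - (len(data) % block)
             return data + bytes([n] * n)

         def pkcs7_unpad(data: bytes) -> bytes:
             n = data[-1]
             if n < 1 or n > len(data) or data[-n:] != bytes([n] * n):
                 raise ValueError("bad padding")
             return data[:-n]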
  23. That's fine, I just don't see a reason for it (continue if true). I think the feedback-node vs. while-loop performance changes based on the surrounding code, and it fluctuates from version to version. For example, my test in 2012 for a specific use showed feedback nodes as faster. The difference is so minor as to be unimportant.
  24. It also has a False constant wired to a "continue if true" while loop, which is also odd. I would guess it's based on old code written in an older style (I think OpenG or some similar package has that same function). Feedback nodes are great.
  25. Depending on what you're wanting to do with the image in IMAQ, it could be easier to compile a little C# or VB module using OpenCV / Emgu CV (http://www.emgu.com/wiki/index.php/Main_Page). It's also not crazy difficult to make a 10-line class which takes the pointer reference in C# and calls the IMAQ DLL directly, or calls the TDMS API directly (http://digital.ni.com/public.nsf/allkb/A3663DE39D6A2C5A86257204005C11CA), or, if you have Measurement Studio (http://www.ni.com/white-paper/8032/en/). Getting a managed, pointer-less language to directly manipulate its memory in order to generate an image structure in a C++ DLL seems more challenging than it's worth.