Everything posted by smithd

  1. I'd suggest looking at the section "State Machine Design Pattern" here: The details will vary, but it sounds like this is what you want. This tool may help, again depending on the details: https://forums.ni.com/t5/Reference-Design-Content/LabVIEW-State-Diagram-Toolkit/ta-p/3606081 It basically lets you model the state machine, and it will script out the (boilerplate parts of the) block diagram for you. (A rough text sketch of the pattern follows below.)
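     Since LabVIEW diagrams can't be pasted as text, here is a minimal sketch of the same pattern in Python; the state names and transitions are placeholders, and in real code the next state would be computed from your inputs/events:

         from enum import Enum, auto

         class State(Enum):
             INIT = auto()
             IDLE = auto()
             ACQUIRE = auto()
             SHUTDOWN = auto()

         # The while-loop + case-structure core of the pattern: each iteration
         # runs one state's code, then selects the next state.
         state = State.INIT
         while state is not State.SHUTDOWN:
             if state is State.INIT:
                 state = State.IDLE        # e.g. after hardware init succeeds
             elif state is State.IDLE:
                 state = State.ACQUIRE     # e.g. when the user presses Start
             elif state is State.ACQUIRE:
                 state = State.SHUTDOWN    # e.g. when acquisition completes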
  2. I'd suggest looking at/borrowing from https://github.com/opengds/OpenGDS I believe there's a good amount of class scripting in there, including (I think) scripting from a template, so it probably has what you need. You may have to dig a bit, though.
  3. I've never done it that way; for shared dependencies I've always made packages and installed them into vi.lib, which eliminates the problem. I don't think there is an easy fix.
     The fix I can think of relies on the fact that there are several steps to a sample project: a dialog is displayed, then the sample project is copied into a new location, then any post-copy scripting code runs. My suggestion would be to ask your users, in the dialog, where their 'root' project directory is. Then, through either the default value VI or that same VI loaded up in a more appropriate application context (you may need the super secret INI key turned on to find the right one), get the ancestor classes loaded in from the appropriate path. Once they are loaded and you reach the copy part of the script, LabVIEW should accept that they are the right ones (it might fuss that dependencies were found in the wrong location, but it should work).
     Another route would be to make a Tools menu item which replicates the sample project script, but with your particular variations, like asking the user for their main directory -- that would be my workaround if LabVIEW does weird stuff with application contexts when it's copying the project over. I don't know the exact procedure.
  4. Are you sure it's 8 data bits, then? ASCII is 7 bits. The VISA icon just indicates that it will wait, because it doesn't have a timeout input as I recall. The actual serial chip is running constantly. The KB is suggesting you give the system a few seconds to change the UART over to the proper settings before doing a read. However, if the system is already running when you launch this LabVIEW application, that will not help, since data is already coming in.
  5. I saw that; it looked like it provided a mechanism to get the password back (where applicable). There are of course other login types these days, like smart cards, but that's kind of cheating in this case since Viper was specifically talking about passwords.
  6. I was curious about this -- I wasn't able to find any example of code which offloads all the password handling to Windows. You can open the standard Windows login dialog, but you still get the password back. From a more realistic perspective, this is to be expected. When you use single sign-on from a browser, Chrome or Firefox has access to your password. When you log into your bank, Google has your information. If you use a bad browser plugin, someone in Ukraine might have your information.
  7. For the framing errors, I'd suggest adding in the code from A here: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019L38SAE VISA is probably caching errors in the background, so you basically want to clear out any old data and clear any framing errors during initialization. Once that's done, I would expect VISA to work equivalently to your terminal program. In fact, you might see if this works (as a starting point) -- that will tell you whether or not VISA is the problem: https://forums.ni.com/t5/Washington-Community-Group/LabVIEW-Hyperterminal/gpm-p/3511381 Once you have your data coming in, you can use http://zone.ni.com/reference/en-XX/help/371361M-01/glang/search_split_string/ to look for the key sequence. As for getting that boolean to another loop, I'd normally say queues, but the new channel (http://www.ni.com/white-paper/53423/en/) stuff may be a better fit to start with. (A text sketch of the flush-then-scan idea follows below.)
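     For what it's worth, here's the flush-at-init, then scan-for-the-key-sequence idea sketched in Python with pyserial (since LabVIEW can't be posted as text); the port name, baud rate, and key sequence are made up:

         import serial

         ser = serial.Serial("COM3", baudrate=115200, timeout=20)
         ser.reset_input_buffer()    # throw away stale bytes from before initialization

         buf = b""
         while True:
             buf += ser.read(ser.in_waiting or 1)  # grab whatever has arrived (at least 1 byte)
             if buf.find(b"KEY") != -1:
                 # found it -- here you'd signal the other loop (a queue, or a channel in LabVIEW)
                 break
             buf = buf[-2:]  # keep just enough tail (len(key) - 1) to catch a split sequence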
  8. Your understanding seems correct to me; 1% error from a DAQ card is too damn high.
  9. You could trigger manually as shown here: http://zone.ni.com/reference/en-XX/help/373197L-01/criodevicehelp/conversion_timing/ with a big delay between samples, and see if that improves things at all. If you are using scan mode, you can change the min time setting here: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000kGRtSAM Also, please verify that you have the module grounded: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019KrCSAU and, again if you are using scan mode: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019LBlSAM Finally, verify that the module is configured as single-ended or differential, as appropriate for your source.
  10. NIPM just uses the opkg format, like their Linux distribution. The limitation there is that it's geared towards system installs, which is why you have to point it at the specific LabVIEW version (e.g. in NIPM they might have NI-SCOPE support for LabVIEW 2015, NI-SCOPE support for LabVIEW 2016, etc.). GPM is, as you said, geared towards a more sane project-oriented development workflow vs. a global code repo.
  11. Neither of those seems to have a license associated with them, while candidus' has a defined license. Also, not the same, but a neat nearby concept: http://sine.ni.com/nips/cds/view/p/lang/en/nid/216348
  12. I don't understand a lot of what you're trying to do, but the serial framing error isn't likely a VISA problem. Are you sure you're configuring the port correctly, with the right baud, stop bits, etc.? If so, look here: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019L38SAE As for why it's stopping after you enter the loop when you highlight execution: you're telling the loop to wait nearly 17 minutes (1,000,000 ms) between reads. That is probably not what you want. In fact, you don't need a wait in there at all, because you are already waiting on the VISA read. If you know your microscope will send a message every 15 seconds, set your VISA timeout to a reasonable number (probably 20,000 ms rather than your current 100,000), set the term char, and that's it. (An equivalent pyserial sketch follows below.)
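     As a sanity check outside LabVIEW, the equivalent pyserial setup would look something like this; the port name and settings are placeholders and must match your microscope's manual:

         import serial

         ser = serial.Serial(
             port="COM4",
             baudrate=9600,
             bytesize=serial.EIGHTBITS,
             parity=serial.PARITY_NONE,
             stopbits=serial.STOPBITS_ONE,
             timeout=20,               # seconds -- a bit longer than the 15 s message interval
         )
         msg = ser.read_until(b"\r\n")  # blocks on the read itself; no extra wait needed in the loop
         print(msg)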
  13. You could ask that about MQTT in general. It seems to be a protocol that just sort of slid into its current position as 'the IoT protocol' almost by accident. Specifically vs. DDS: MQTT is a lot simpler and easier for a normal person to understand ("ok, now send a structure with a command name and a payload string" should sound familiar to everyone, I think), and it works over TCP, making it potentially more network-friendly than the UDP-based DDS. Personally, I find some of the marketing hard to believe -- when people claim 'deterministic' networking performance from a magical software package with a bunch of features, without laying out any numbers or proof, it makes me concerned, as one example. To put it another way, it seems geared towards marketing to executives rather than to engineers. I'm also not the biggest fan of how it was pushed by NI.
  14. I've always done it the terrible way, which is why GPM is exciting to me -- use VIPM to install your reuse code into vi.lib, then completely uninstall it to make changes, then rebuild. You can sometimes keep it simple by writing reuse code in version N while you write your applications in N+M, but that isn't always feasible if NI releases something worth upgrading for (VIMs). I've tried subtrees in source control, but I seem to always reach a diamond pattern which breaks the subtree. I don't see maintenance as being particularly hard with the GPM style.
      Also, while I'm not a web developer, I've certainly downloaded and tried out web applications, and NPM is pretty amazing at providing me, the clueless end user, with a working system. It's kind of like a VM in that way -- it grabs exactly the versions the developers used, and it doesn't touch anything else in my system. I'm curious if anyone has thoughts on downsides to the GPM style besides the palette issue... because I think that's solvable (if annoying).
      For a distribution situation it matters a bit less. Sure, the developer has to deal with managing consumption of their own packages alongside development of those packages, but everyone else just downloads the latest version and installs it.
  15. Tell you what, we can chat about this again when Jenkins puts out its first plugin specifically to work with GPM or VIPM. Until that point, you need a CLI or, even better, what people used to do -- a batch file which wraps LabVIEW.
      You can put steps into your install process that tell your user to "open services.msc, then look for the service 'blah co's sweet blah blah app', right-click it, select stop, wait for it to finish, then return to this wizard". For me, I'd prefer to put a system exec call to "sc stop mysweetapp" in my package and be done with it (sketch below).
      As for making a Windows service: it's not great, but it's not super hacky either, and I now have a dozen or so services running using this pattern.
      In your application:
      • Open a self-reference so closing the panel doesn't kill the app.
      • Add a Panel Close event -- NOT "Panel Close?" (the filter event), but "Panel Close" itself.
      • When the Panel Close event is generated, do your safe shutdown, then close the self-reference and exit LabVIEW.
      Then:
      • Use NSSM to create a service wrapping your application.
      • On the shutdown page, check only WM_CLOSE and "Terminate process", with a reasonable timeout specified. WM_CLOSE will trigger the Panel Close event.
      My solution to the NIPM limitation was to bundle NSSM with each package.
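      As a sketch of the "just script it" option, a package's custom execute step could be as simple as this (Python here only for illustration; the service name is a placeholder):

          import subprocess

          def stop_service(name: str) -> None:
              # Ask the Windows service control manager to stop the service; NSSM then
              # sends WM_CLOSE to the wrapped LabVIEW app, firing its Panel Close event.
              subprocess.run(["sc", "stop", name], check=False)

          stop_service("MySweetApp")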
  16. I thought I saw a UI in the video, but the main point is that this is intended for people with automated processes... so a CLI is the correct answer. Yeah, I certainly had hoped that NIPM would replace VIPM. I've occasionally heard complaints about NI stomping on products alliance partners have developed, and I'm sure that does suck, but it amazes me that something so fundamental to NI's software platform is where they decided to draw the line. It's technically possible to build a source distro and then convert that into an NIPM package (basically what VIPM does, as I understand it), but palettes are impossible. So far as I can tell, VIPM's greatest gift is not the package part but the actually decent palette editor (something else I'd expect to be core to NI's software platform, as the palette is one of LabVIEW's killer features for learning to write code). I've also given that feedback about NIPM post-install calls to the SystemLink people. They want to do automated deploys of packages, but there is no obvious way to stop/start Windows services. I don't particularly care if it's a VI call, but I do think at least a command-line call is needed.
  17. A semi-common way of doing this is to make GitHub pages with lists -- these pages are themselves git repos and thus can be modified by anyone, just like a wiki. Examples:
      https://github.com/rust-unofficial/awesome-rust
      https://github.com/bulutyazilim/awesome-datascience
      https://github.com/node-opcua/awesome-iot
      meta: https://github.com/sindresorhus/awesome
      Obviously a lot of these are poorly organized, including the data science one, but...
  18. MIT/BSD is the traditional answer, but the real one is "you should license your code". That means, at minimum, having the full legal text of the license (not just writing "BSD" in the VIPM field) in your repository/package/distribution, with a proper name (not [enter name here], as I've seen sometimes), and ideally adding it to each separate file. GitHub provides https://choosealicense.com/ as a helpful guide.
      OMFG this is great. Kudos to MGI. I've often thought about this, specifically the per-project install, but the amount of effort I would have to personally invest was too high. I may very well be suuuper disappointed after spending time looking at the details, but it's exciting just that someone is trying.
      ---
      The other big unique LabVIEW pain point is versioning. As long as NI insists that a 2018 VI is different from a 2017 VI, there's going to be the potential for problems.
  19. Section 2.2 here may apply: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000P6vdSAC
  20. I'd stick to exactly what it says in the exam guide. Specific to your question #2, I've bolded an important item. As I understand it, there is at least a VI Analyzer pre-check for the exam, followed by two human graders. I've underlined the items below that I think a computer can easily check. It's impossible to know how the graders work, but I would guess that they use the VI Analyzer to point them at obvious errors, so if you don't get hit by the analyzer, you might not get hit by the human grader either. Note that, just as in real life, you can pass with zero documentation. (The above is both accurate and a joke, because documentation points are super easy to get.) **Yes, it's literally cut off like this in the guide for the documentation section. Page 12 mentions icons as well, which I personally interpret as "is your code in a library" and "does your library have a non-default icon", but I may be wrong.
  21. You would not build a broker/server implementation in LabVIEW; you would use an off-the-shelf solution like Mosquitto. I'm assuming you mean a client implementation. For clients, the protocol is deliberately pretty simple (it targets low-power devices), so I wouldn't feel uncomfortable using one of the LabVIEW implementations out there after testing it a little. If you don't feel comfortable with that, the real answer is to wrap one of the mature C libraries. The ones I'm aware of are Paho and Mosquitto. I believe both have synchronous APIs (e.g. call connect + block, call send + block, etc.), which is usually OK for a simple application. For example, Paho's header (https://github.com/nivertech/paho.mqtt.c/blob/master/src/MQTTClient.h) has "MQTTClient_receive", a synchronous receive function, and "MQTTClient_messageArrived", a callback. Four years ago I made a LabVIEW wrapper for the then-current async version of Paho, but it's so out of date now that I'm not even going to post it. The API is relatively easy to wrap -- for example, here is publish:
      DLLExport int MQTTClient_publish(MQTTClient[void*] handle, char* topicName, int payloadlen, void* payload, int qos, int retained, MQTTClient_deliveryToken* dt);
      Everything is either an opaque pointer or an int, making it easy to consume in any language you like. The client header above includes a full example, and it looks like the hardest part would be constructing some of the options structures (e.g. replicating "MQTTClient_connectOptions_initializer", a macro, in LabVIEW). (A small Python sketch of the same synchronous pattern follows below.)
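      If it helps to see the synchronous client pattern in text form, here's a minimal sketch with Paho's Python client (pip install paho-mqtt, 1.x-style API); the broker address and topic are placeholders:

          import paho.mqtt.client as mqtt

          client = mqtt.Client()                 # plain TCP client
          client.connect("localhost", 1883)      # connect + block, as with the C sync API
          client.publish("test/topic", b"hello", qos=1)
          client.loop(timeout=1.0)               # service the socket once so the QoS 1 ack completes
          client.disconnect()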
  22. So, to one specific point... doubles can only represent ~17 significant decimal digits, so your worst-case string is something like "-N.NNN...NNNE-308" -- about 24 bytes, or ~3x overhead over the 8-byte binary (quick check below). But more to the point in all of this: it's a display application, so I'd personally expect your conversion code to be much less precise. For the 2D string case... I guess it depends on how much you think is reasonable to support. To my mind, if someone says "here's my 10 MB table of data, let me run this magic front panel tool I downloaded from the internet and use it to publish that data", my general reaction would be "good luck with that". Fair enough. I don't have much in the way of web skills either, for something like that.
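      A quick way to check that worst-case figure in Python (repr prints the shortest string that round-trips, up to 17 significant digits):

          import sys

          s = "-" + repr(sys.float_info.max)  # '-1.7976931348623157e+308'
          print(s, len(s))                   # 24 characters vs. 8 bytes binary, ~3x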
  23. Talking to a Zaber motor in LabVIEW is equivalent to Python -- you are sending serial commands to a device and waiting for its response. Zaber provides a fairly well done API on their website, along with examples for talking to their drives -- I would start there. This may also be helpful: http://www.ni.com/white-paper/4370/en/ (A generic sketch of the command/response pattern follows below.)
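      The pattern itself, whether in Python or LabVIEW VISA, is just write-then-read. A generic pyserial sketch -- the port and the command string here are only illustrative, take the real ones from Zaber's protocol manual:

          import serial

          ser = serial.Serial("COM5", 115200, timeout=2)
          ser.write(b"/1 home\n")        # send an ASCII command to device 1 (example command)
          reply = ser.read_until(b"\n")  # wait for the device's reply line
          print(reply)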
  24. While I know 100 is just an example, it's worth keeping in mind that 100 doubles is 6400 bits -- about 7 µs on gigabit Ethernet, or 0.00064% of your network bandwidth if updating every second (arithmetic check below). You can scale that number of elements up pretty far before it becomes important. A reasonable middle ground could be to send [viewed region] +/- N elements. For N=1000 that's about 150 µs of transfer time, and for small-to-moderate arrays this would still be the whole array. For large arrays you can make N configurable, so if someone really wants to transmit a 10M-element array, they can. I disagree about the change detection. Unless the number of elements changing is small (unlikely for measurements), you end up with a lot of transmission overhead ("update index X to value Y" is 50% overhead for doubles), and regardless of the number of elements changing, your processing overhead is high (is X_n == X_n-1? is Y_n == Y_n-1? etc.), although probably negligible.
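      Checking the arithmetic (this assumes a 1 Gbit/s link, which is where the ~7 µs figure comes from):

          n = 100
          bits = n * 8 * 8          # 100 doubles * 8 bytes * 8 bits = 6400 bits
          link = 1e9                # 1 Gbit/s
          print(bits / link * 1e6)  # ~6.4 us per update
          print(bits / link * 100)  # ~0.00064 % of bandwidth at one update per second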