smithd

Members
  • Content Count: 763
  • Joined
  • Last visited
  • Days Won: 42

Everything posted by smithd

  1. I saw that; it looked like it provided a mechanism to get the password back (where applicable). There are of course other login types these days, like smart cards, but that's kind of cheating in this case since Viper was specifically talking about passwords.
  2. I was curious about this -- I wasn't able to find any example of code which offloads all the password handling to Windows. You can open the standard Windows login dialog, but you still get the password back. From a more realistic perspective, this is to be expected. When you use single sign-on from a browser, Chrome or Firefox has access to your password. When you log into your bank, Google has your information. If you use a bad browser plugin, someone in Ukraine might have your information.
  3. For the framing errors I'd suggest adding in the code from A here: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019L38SAE VISA is probably caching errors in the background, so you basically want to clear out any old data and clear any framing errors during initialization. Once that's done, I would expect VISA to work equivalently to your terminal program. In fact, you might see if this works (as a starting point) -- that will tell you whether or not VISA is the problem: https://forums.ni.com/t5/Washington-Community-Group/LabVIEW-Hyperterminal/gpm-p/3511381 Once you hav…
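As a rough illustration of that init step (sketched in Python with PyVISA rather than LabVIEW, with a made-up resource name and settings), the idea is: configure the port to match the device exactly, then clear out anything already sitting in the buffer before the first real read.

```python
# Sketch only: PyVISA stand-in for the init step described above.
# The resource name and serial settings below are placeholders.
import pyvisa
from pyvisa import constants

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL3::INSTR")       # hypothetical COM port

# A baud/parity/stop-bit mismatch is the usual source of framing errors,
# so these must match the instrument exactly.
inst.baud_rate = 9600
inst.data_bits = 8
inst.parity = constants.Parity.none
inst.stop_bits = constants.StopBits.one
inst.read_termination = "\r\n"
inst.timeout = 2000                           # ms

# Clear the session so stale bytes (and any framing error raised before
# initialization finished) don't poison the first real read.
inst.clear()
```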
  4. Your understanding seems correct to me; 1% error from a DAQ card is too damn high.
  5. You could trigger manually as shown here: http://zone.ni.com/reference/en-XX/help/373197L-01/criodevicehelp/conversion_timing/ with a big delay between samples, and see if that improves things at all. If you are using scan mode, you can change the min time setting here: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000kGRtSAM Also, please verify you have the module grounded: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019KrCSAU and, if you are using scan mode, see also: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019LBlSAM And you shou…
  6. NIPM just uses the opkg format, like their Linux distribution. The limitation there is that it's geared towards system installs, which is why you have to point it at the specific LabVIEW version (e.g. in NIPM they might have NI-SCOPE support for LabVIEW 2015, NI-SCOPE support for LabVIEW 2016, etc.). GPM is, as you said, geared towards a more sane project-oriented development workflow vs a global code repo.
  7. Neither of those seems to have a license associated with it, while candidus' has a defined license. Also, not the same, but a neat nearby concept: http://sine.ni.com/nips/cds/view/p/lang/en/nid/216348
  8. I don't understand a lot of what you're trying to do, but the serial framing error isn't likely a VISA problem. Are you sure you're configuring the port correctly with the right baud, stop bits, etc.? If so, look here: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019L38SAE As for why it's stopping after you enter the loop when you highlight execution: you're telling the loop to wait for over 16 minutes (1,000,000 ms) between reads. That is probably not what you want. In fact, you don't need a wait in there at all, because you are already waiting on the VISA read. If y…
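To make the "the read already waits for you" point concrete, here is a minimal sketch (Python/PyVISA, placeholder resource name): the loop has no sleep in it at all, because the blocking read paces it.

```python
# Sketch: loop paced by the blocking VISA read itself -- no extra wait needed.
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL3::INSTR")       # hypothetical port
inst.read_termination = "\n"
inst.timeout = 1000                           # ms, not 1,000,000

while True:
    try:
        line = inst.read()                    # blocks until a message or timeout
        print("received:", line.strip())
    except pyvisa.errors.VisaIOError:
        pass                                  # timeout: nothing arrived, try again
```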
  9. You could ask that about MQTT in general. It seems to be a protocol that just sort of slid into its current position as 'the IoT protocol' almost by accident. Compared specifically with DDS, MQTT is a lot simpler and easier for a normal person to understand ("OK, now send a structure with a command name and a payload string" should sound familiar to everyone, I think), and it works over TCP, making it potentially more network-friendly than the UDP-based DDS. For me personally, I feel like some of the marketing is hard to believe -- when people start making claims about 'deterministic' performanc…
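The "command name plus a payload string" shape referred to above, sketched as plain JSON in Python (the field names are made up for illustration):

```python
# A command-name + payload-string message, serialized as JSON.
import json

msg = json.dumps({"command": "set_setpoint", "payload": "42.0"})
# An MQTT client would then publish this string to some agreed-upon topic.
```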
  10. I've always done it the terrible way, which is why GPM is exciting to me -- use VIPM to install your reuse code in vi.lib, then completely uninstall it to make changes, then rebuild. You can sometimes keep it simple by writing reuse code in version N while you write your applications in N+M, but that isn't always feasible if NI releases something worth upgrading for (VIMs). I've tried subtrees in source control, but I seem to always reach a diamond pattern which breaks the subtree. I don't see maintenance as being particularly hard with the GPM style. Also, while I'm not a web dev…
  11. Tell you what, we can chat about this again when Jenkins puts out its first plugin specifically to work with GPM or VIPM. Until that point, you need a CLI or, even better, what people used to do -- a batch file which wraps LabVIEW. You can put some steps into your install process, if you want, that tell your user to "open services.msc, then look for the service 'blah co's sweet blah blah app', right-click, select stop, wait for it to finish, then return to this wizard". For me, I'd prefer to put a system exec call of "sc stop mysweetapp" in my package and be done with it. As for mak…
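A minimal sketch of that "just shell out to sc" approach, using Python's subprocess in place of LabVIEW's System Exec; "mysweetapp" is the post's made-up service name.

```python
# Stop a Windows service by shelling out to sc.exe, then report the result.
import subprocess

result = subprocess.run(["sc", "stop", "mysweetapp"],
                        capture_output=True, text=True)
print(result.returncode)
print(result.stdout or result.stderr)
```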
  12. I thought I saw a UI in the video, but the main point is that this is intended for people with automated processes... so a CLI is the correct answer. Yeah, I certainly had hoped that NIPM would replace VIPM. I've occasionally heard complaints about NI stomping on products that alliance partners have developed, and I'm sure that does suck, but it amazes me that something so fundamental to NI's software platform is the place where they decide to draw the line. It's technically possible to build a source distro and then convert that into an NIPM package (basically what VIPM does, as I understan…
  13. A semi-common way of doing this is to make GitHub pages with lists -- these GitHub pages are themselves git repos and thus can be modified by anyone, just like a wiki. Examples: https://github.com/rust-unofficial/awesome-rust https://github.com/bulutyazilim/awesome-datascience https://github.com/node-opcua/awesome-iot Meta: https://github.com/sindresorhus/awesome Obviously a lot of these are poorly organized, including the data science one, but...
  14. MIT/BSD is the traditional answer, but the real one is "you should license your code". That means at minimum having the full legal text of the license (not just writing "BSD" in the VIPM field) in your repository/package/distribution, with a proper name (not [enter name here], as I've seen sometimes), and ideally adding it to each separate file. GitHub provides https://choosealicense.com/ as a helpful guide. OMFG, this is great. Kudos to MGI. I've often thought about this, specifically the per-project install, but the amount of effort I would have to personally invest was too high. I…
  15. section 2.2 here may apply: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000P6vdSAC
  16. I'd stick to exactly what it says in the exam guide. Specific to your question #2, I've bolded an important item. As I understand it, there is at least a VI Analyzer pre-check for the exam, followed by two human graders. I've underlined items below that I think a computer can easily check. It's impossible to know how the graders work, but I would guess that they use the VI Analyzer to point them at obvious errors, so if you don't get hit by the analyzer you might not get hit by the human grader either. Note that, just as in real life, you can pass with zero documentation. (The above is…
  17. You would not build a broker/server implementation in LabVIEW; you would use an off-the-shelf solution like Mosquitto. I'm assuming you mean a client implementation. For clients, the protocol is deliberately pretty simple, for low-power devices, so I wouldn't feel uncomfortable using one of the LabVIEW implementations out there after testing it a little. If you don't feel comfortable, the real answer is to use one of the mature C libraries. The ones I'm aware of are Paho and Mosquitto. I believe both have synchronous APIs (e.g. call connect and block, call send and block, etc.), which is usually…
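For a feel of that call-and-block client style, here is a small sketch using the Eclipse Paho Python client (paho-mqtt) rather than the C libraries; the broker address and topic are placeholders, not anything from the original thread.

```python
# Sketch: blocking-style MQTT connect/publish with paho-mqtt.
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("test.mosquitto.org", 1883)    # blocks until connected (or raises)
client.loop_start()                           # background thread services the socket

info = client.publish("lab/rig1/status", "running", qos=1)
info.wait_for_publish()                       # block until the broker has the message

client.loop_stop()
client.disconnect()
```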
  18. So, to one specific point... doubles only carry up to 17 significant digits, so your worst-case string would be something like "-N.NN....NNE-15" -- about 23 bytes, or roughly 3x overhead. But more to the point in all of this, it's a display application, so I'd personally expect your conversion code to be much less precise. For the 2D string case... I guess it depends on how much you think is reasonable to support. To my mind, if someone says "here's my 10 MB table of data, let me run this magic front panel tool I downloaded from the internet and use it to publish that data", my general reaction would be "good luck with that". Fair e…
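A quick way to sanity-check that size argument in Python (the specific value is arbitrary):

```python
# Sanity check of the ~3x figure: a double is 8 bytes in binary, and a
# round-trippable decimal form needs up to 17 significant digits.
x = -2.2250738585072014e-308        # an arbitrary "ugly" double
s = "{:.17g}".format(x)
print(s, len(s))                    # 24 characters for this one
print(len(s) / 8)                   # roughly 3x the binary size
```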
  19. Talking to a Zaber motor in LabVIEW is equivalent to doing it in Python -- you are sending serial commands to a device and waiting for its response. Zaber provides a fairly well-done API on their website, along with examples for talking to their drives -- I would start there. This may also be helpful: http://www.ni.com/white-paper/4370/en/
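The send-a-command, wait-for-a-reply pattern, sketched with pyserial in Python; the port name, baud rate, and the command text are placeholders -- take the real command syntax from Zaber's protocol manual, not from this example.

```python
# Sketch of the send-command / wait-for-response pattern over serial.
import serial

port = serial.Serial("COM4", 115200, timeout=1)   # placeholder port/baud

port.write(b"/1 home\n")            # illustrative ASCII command, not verified
reply = port.readline()             # blocks until the device answers (or timeout)
print(reply.decode(errors="replace").strip())

port.close()
```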
  20. While I know 100 is just an example, it's worth keeping in mind that 100 doubles is 6,400 bits -- about 7 usec on a gigabit link, or 0.00064% of your network bandwidth if updating every second. You can scale that number of elements up pretty far before it becomes important. A reasonable middle ground could be to send [viewed region] +/- N elements. For N=1000 that's about 150 usec of transfer time, and for small-to-moderate arrays this would still be the whole thing. For large arrays you can make N configurable, so if someone really wants to transmit a 10M-element array they can. I disagree about the change detection. U…
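The numbers above line up if you assume roughly a 1 Gbit/s link; a quick reproduction of the arithmetic:

```python
# Back-of-the-envelope check of the figures above, assuming a 1 Gbit/s link.
link_bps = 1e9
bits = 100 * 64                       # 100 doubles
print(bits)                           # 6400 bits
print(bits / link_bps * 1e6, "usec")  # ~6.4 usec on the wire
print(bits / link_bps * 100, "%")     # ~0.00064 % of the link at 1 update/s
```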
  21. Looks like you might have an unsupported Excel version? Although this says 2016 should be compatible... http://www.ni.com/product-documentation/54029/en/ The fast answer is that you should try to click on each of the broken nodes and see if you can re-select whatever is shown. So, for example, click on the "chart" node and see if "application" is an option, and so on. I've done this before with "incompatible" versions (e.g. Excel 2016 with Report Gen 2014, or the Database toolkit in LabVIEW 64-bit) and it's worked, but obviously you get no guarantees. There are also other Excel toolkits out t…
  22. Yeah, I suppose splitting it up there are three sorts:
      • Obvious issues: things you can get down to a simple test case, or easily describe (e.g. when I use Quick Drop to insert Delete From Array onto a wire, it isn't correctly wired).
      • Crashes: they already have NIER. It would be interesting to see how common a crash you just got is, but at least they know (for the x% of LabVIEW instances that are on a network and not firewall-blocked).
      • WTFs: this is what you described, generally one-off issues.
      And of course I have no clue what proportion of issues fall into which bucket. Take crashe…
  23. One thing I absolutely wish is that NI would have a publicly facing issue tracker. For example, the Jira for Jira: https://jira.atlassian.com/projects/JRASERVER/issues/JRASERVER-67872?filter=allopenissues or the Bugzilla for Mozilla: https://bugzilla.mozilla.org/buglist.cgi?query_format=specific&order=relevance+desc&bug_status=__open__&product=Firefox&content=crash&comments=0 I've heard the NI argument against exposing older issues -- there may be internal or customer-specific information -- but I don't see that as an excuse moving forward. Or, another way: make an idea…