Everything posted by smithd

  1. It sounds doable as long as the top-level interface doesn't change and there are no dependencies shared from exe to PPL, especially shared parent classes. Not exactly the same, but a plugin-based application worked well for me with a static exe and rebuilt PPLs, except that occasionally the PPL child class and the exe parent class would stop agreeing and nothing would load. So long as you're basically using the exe as an HTTP client and launcher, I think you'll be fine.
  2. That's actually the easy part, which is why I was thinking about it as an option. The specific help doc is here (https://docs.influxdata.com/influxdb/v0.9/guides/writing_data/), but basically you'd just use the built-in HTTP(S) client library in LabVIEW with the URL and data string as shown on that page, essentially "my_interesting_measurement,<metadata fields> value=123.456" (a minimal sketch is below). Also, for daq.io, it looked like they had a self-hosted option, so it wouldn't be putting data in 'the cloud' so much as on a local server at your company.
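     Translated out of LabVIEW, the write boils down to a single HTTP POST. A minimal sketch in Python, with requests standing in for LabVIEW's HTTP client VIs; the host, database name, and tag are placeholders:

         import requests

         # InfluxDB 0.9 line protocol: measurement,tags field=value
         line = "my_interesting_measurement,rig=test1 value=123.456"

         resp = requests.post(
             "http://localhost:8086/write",  # placeholder host
             params={"db": "mydb"},          # placeholder database name
             data=line,
         )
         resp.raise_for_status()             # InfluxDB returns 204 No Content on success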
  3. For DDS, my (admittedly also vague) understanding is that they use the DDS publish/subscribe model to develop various tools which consume the data. A concept diagram (i.e. marketing material) can be found on this page: https://www.rti.com/products/dds and the add-on product would be the database integration service: https://www.rti.com/products/dds/add-on-products. Looking at it more thoroughly, I think it's probably way past what you need and geared towards a different use case.

     As for OPC UA, you probably wouldn't need a DSC runtime license in that situation, since you're using the separate Modbus library. To make sure I'm being clear: I would see you running an embedded OPC UA server within your application, which itself could host a short-term historical log. For longer-term logging, I'd imagine using a different vendor with the capability of pulling data from any OPC UA server, using OPC UA as the standard SCADA protocol backbone for your system. I don't have a specific recommendation for this, but a quick google of "OPC UA data historian" comes up with this as an example: https://opcfoundation.org/products/view/prosys-opc-ua-historian and that company also has an OPC UA client and this thing: https://www.prosysopc.com/products/opc-ua-modbus-server

     All that having been said, if you have a very small system and licensing costs are a concern, all of the above is probably overkill. A lot of these integrated logging solutions seem to be geared towards bigger systems, and so is the pricing.

     Edit: These two also occurred to me: https://www.daq.io/what-daqio-is/ looks like a sort of integrated solution for historical data using a simple web-based LabVIEW API. I can't speak to it past having seen a demo at some point. https://www.influxdata.com/time-series-platform/telegraf/ -- I had thought about using this myself at one point, but my goals got redirected. As I recall, Telegraf is their metrics collection agent, feeding a time-series database (InfluxDB) with visualization tooling ("Chronograf") and a simple HTTPS interface for inserting data.
  4. A team at NI has created an application engine which may help do what you want: http://ni.com/dcaf. It's a configurable plugin engine which maps a plugin (for example, there is one which polls Modbus values at a periodic rate) to scoped data storage inside the engine, and then maps that data out to other plugins (for example, a TDMS writer) -- a minimal poll-and-log loop in that spirit is sketched below. It can obviously get more complicated as you add custom logic, but I think they've been doing a pretty good job of making that easier as well. If it sounds helpful, the guys working on it are very accessible, so just message them or post in that group.

     For retrieving data from TDMS and processing it, I think almost anyone at NI would recommend DIAdem, but it's not really a SCADA tool so much as a fancy Excel tool -- displaying data on the fly for an operator might be tougher. I'm not personally aware of anything that would help with everything. Something that may help partially is the new OPC UA module, licensed separately from DSC (I think it's something like $500 for a seat and then maybe $100 for a deploy, if I remember right). I say new because the outside looks the same, but it adds alarms and a historical server, built in. You'd essentially copy your Modbus variables into an OPC UA server instance, and then clients could read N samples' worth of historical data (i.e. you could maybe store the previous day in memory). Once it's in OPC UA land, I would bet you could find some other vendor with a good long-term logger. Along similar lines, the RTI DDS toolkit is a similar protocol library where the RTI folks sell add-on toolkits, like loggers, which consume the published data. So again you'd read Modbus variables, copy them into DDS, and run a third-party service to do the logging and history.
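     To make the "poll Modbus, map to storage, hand to a logger" idea concrete, here's the loop sketched in Python. This assumes the pymodbus 2.x-style API; the PLC address, register range, and CSV file (standing in for a TDMS writer) are placeholders -- DCAF does the equivalent wiring through configuration rather than code:

         import csv
         import time
         from pymodbus.client.sync import ModbusTcpClient  # pymodbus 2.x import path

         client = ModbusTcpClient("192.168.1.10")  # placeholder PLC address
         client.connect()
         with open("history.csv", "a", newline="") as f:
             writer = csv.writer(f)
             for _ in range(60):  # poll once per second for a minute
                 rr = client.read_holding_registers(0, 10, unit=1)  # placeholder range
                 if not rr.isError():
                     writer.writerow([time.time()] + rr.registers)
                 time.sleep(1.0)
         client.close()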
  5. I think if we've learned anything in the past few decades, it's that a dedicated attacker can do a lot. I tend to think a password is sufficient -- the people looking to get past a password either:
     - have a legitimate reason, like they locked themselves out, or
     - were never going to pay for it anyway.
     So while I don't know your particular market or customers, I wouldn't generally lose sleep over it. As to the specific question, this may help: http://digital.ni.com/public.nsf/allkb/831F38C46BCBDADE8625793A0054BB19 It sounds like removing block diagrams should be sufficient, with the only additional comment I have being that you should also remove front panels for any code which may have sensitive data on it -- for example, a license key which gets passed to a function. If someone manages to get that function to open its front panel, I believe it's possible to extract the data.
  6. Yeah, so my issue with the 3.5x character times is that the VISA bytes-at-port count isn't always accurate, especially in my case on Linux targets. To give you an example, let's say you have a single master and a single slave, and the master sends 134 bytes. Polling the bytes at port gives you:

         0 ms  -- 0 bytes
         5 ms  -- 20
         10 ms -- 20
         15 ms -- 20
         20 ms -- 40
         25 ms -- 40
         30 ms -- 40
         etc...

     So if 3.5 character times is (in this example) <15 ms, we'd start trying to parse after getting only 20 bytes. I eventually got hold of the Linux guys in R&D, who explained that this is fundamental to how NI-VISA and the Linux serial driver interact, and that the behavior couldn't be made more sensible. If you imagine this on a multi-device network, it's possible that you accidentally merge requests, because you have no way of detecting the silence between messages. If you just use the CRC, it is, as mentioned above, quite expensive, and it's possible (if unlikely) to hit a byte string where the CRC method produces a false positive.

     Assuming you know all of the function codes (I know you don't), the right answer is to try to parse the packet, using timeouts as a guide for error handling. Since you don't know all the function codes, I think you have to jump through some hoops to use all three together: if the function code is known, parse it; if you read the function code and it's unknown, fall back to the CRC method in combination with silence times. However, the silence threshold has to be pretty flexible, and you have to expect that you will miss a silent period if baud rates are high. (A sketch of the CRC check is below.)
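     For reference, one CRC check is cheap; what gets expensive is re-running it over every candidate window of the input stream when you can't find frame boundaries. A sketch of the standard Modbus RTU CRC-16 in Python:

         def modbus_crc16(data: bytes) -> int:
             """Modbus RTU CRC-16: init 0xFFFF, reflected polynomial 0xA001."""
             crc = 0xFFFF
             for byte in data:
                 crc ^= byte
                 for _ in range(8):
                     crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
             return crc

         def crc_ok(frame: bytes) -> bool:
             # The last two bytes of a frame hold the CRC, little-endian.
             expected = frame[-2] | (frame[-1] << 8)
             return modbus_crc16(frame[:-2]) == expected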
  7. ^^ Yeah, I can tell you from personal experience that actually trying to use the 3.5 characters of silence is not a great idea. You do need a read timeout and polling, but you should parse the packet as you read it to determine how long it's supposed to be, and then check the CRC (see the sketch below). As mentioned above, the serial software stack you're going through (LabVIEW -> VISA -> OS) is not conducive to waiting for silence. The packet should be parseable, and if you keep in mind that a Modbus RTU packet is limited to a pretty tiny maximum size (256 bytes), there are a lot of checks you can do to eliminate issues and basically "close the connection" if a problem is encountered. The catch is that both master and slave need to back off appropriately, clear buffers, etc., to make sure that any communication error can be resolved without rebooting the system on either side.
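     Here's the "parse as you read" idea sketched in Python for slave responses. The length rules below cover only the common public function codes (an assumption; vendor-specific codes need the CRC/silence fallback from the previous post):

         def expected_length(buf: bytes):
             """Return the expected total length of a slave response frame
             (including the 2-byte CRC), or None if more bytes are needed
             or the function code is unknown."""
             if len(buf) < 2:
                 return None
             fc = buf[1]
             if fc & 0x80:                       # exception: addr, fc|0x80, code, crc
                 return 5
             if fc in (0x01, 0x02, 0x03, 0x04):  # reads: byte 3 is a byte count
                 return 3 + buf[2] + 2 if len(buf) >= 3 else None
             if fc in (0x05, 0x06, 0x0F, 0x10):  # writes echo a fixed 8-byte frame
                 return 8
             return None                         # unknown -> fall back to CRC scan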
  8. I still see this in various versions. It usually goes away if you clear the compiled object cache or restart LabVIEW; the most extreme fix I've had to do is create a new project.
  9. It seems to me there is MASSIVE pressure to avoid escalation of these sorts of issues. You say "people don't escalate these things"; I say "the AE listens to my issue and says they need an encapsulated, reproducible case with a bow on top before they'll escalate it". That's interesting... to me, a lot of the issues I have with LabVIEW are what I'd describe as "transient showstoppers". For example, I had a similar issue where right-clicking anything would hang LabVIEW, and I needed to get to a class property window. The problem eventually went away, and it's not a crash. Because of the AE situation, there is basically no way to provide this feedback to NI in a way they will respond to. Making a reproducing case is difficult if not impossible, and where I work I can't just hand NI the whole codebase, so you end up kind of stuck. You might say "a problem like that is hard to solve for any company", and my response is "well, it sure seems to happen a lot with LabVIEW".

     Edit: Just got around to this thread, another perfect example: https://lavag.org/topic/20325-slow-index-array-of-classes/ -- "As much as I'd love to dig into the LabVIEW memory manager to truly understand what's happening in the dev environment (not), I am just going to put this in the 'no longer a problem' column and move on."

     On the first part, changing versions: definitely agree, and this bit me hard last year. However, I'm totally on board with using non-SP1 versions. I have never waited, and I've not seen any version where I considered the major release any different from the SP in terms of reliability. As an example, I would absolutely take 2015 non-SP over 2014 SP1 any day. 2014 was just a horrible year for LabVIEW; I don't know why. Depending on what you develop, Linux VMs are an option. You can request a Linux LabVIEW copy from your salesperson, I believe (or just call in to support).
  10. Not sure, but have you tried it against another non-NI client? A quick google finds this as a possible free option: http://www.matrikonopc.com/products/opc-desktop-tools/opc-explorer.aspx
  11. Terms like "unmaintainable" (and, probably more so, the flip-side terms "maintainable", "scalable", etc.) bother me because even when they have a meaning, they really mean "code I don't feel like dealing with" (or, on the flip side, "this is the style of code I like"). Personally, people who use these terms turn me off to the potentially valid arguments they are making. Before you respond: yes, I understand this is exactly why you want a new name. My main point is that they are not in any way specific, and I don't think this thread can help, because you are not specific about how this particular function is more difficult to maintain, or about who you think will have sufficient difficulty with the code to call it "unmaintainable".

      I think the who is especially important, because all of us have at one point or another done something completely "unmaintainable" but were happy to do it, and happy to "paintain" it, because it solved the problem and was the best we knew. It wasn't all that long ago that I was learning LabVIEW and had something like 30 boolean control references bundled into an array for dynamic registration, and someone said "why don't you just put them in a cluster and get the children"... but I was fine with the 30 stupid controls, because they solved my problem, weren't an important enough part of my code, and I didn't know any better.

      Besides the who, there are different vectors of difficulty with maintenance. Is it hard because you end up with a bunch of runtime errors (e.g. the standard argument against variant payload messages)? Is it hard to teach a newbie? Does any change cascade into 37 dependencies? Are you consuming a globally shared resource with many undocumented writers? These are all valid reasons the code might be "unmaintainable", but they are far more specific.

      tl;dr: I vote paintainable. That's an awesome word. Already added it to spell check.
  12. I've seen the high-CPU issue for years, and I never use the breakpoint manager. Actually, the high CPU load is kind of a good sign, as it usually appears after LabVIEW has been open for a long time, which means it hasn't crashed yet. Personally, I haven't seen anything in 2017 which makes me think it's less stable, but I don't use it as regularly as 2015. The thing that has me annoyed is that after installing the new driver set, MAX crashes on every single close (in addition to the regular MAX crashes).
  13. Fair enough... Getting off topic, but at least where I'm at, LabVIEW is most often competing with Python or MATLAB, so speed isn't a major concern for those users. I definitely see the speed gains on RT, and the real benefit seems to be memory usage these days.
  14. Yeah, I'm on board; I was just throwing out what I thought would be better. I suppose you're right about per-function, but per-module is handled fairly effectively with conditional symbols by just defining lib_blahblah_dbg (a rough sketch of the idea is below). I understand vi_blah_dbg would be more annoying, but then again, that depends on how many specific VIs you have to debug at a time. But yeah, I understand neither of those will come to pass in current LabVIEW.

      So let me switch to the argument against your feature: the other concern I'd have, more generally, is that while you may turn debugging off in your builds, the difficulty of actually producing an optimized build with app builder means many clients of your code (or if not yours, then someone else using this feature) will fail to do so, leading to not just unoptimized code, but deliberately slower code which produces weird logging data they don't want. This structure seems like it could lead to many situations where people throw in code they expect not to run for end users, but it does, because the end user isn't familiar with how to turn off debugging on all parts of their application. You might say that as a library developer you should ship code with debugging turned off, which I'd agree with if it weren't already a pain to toggle debugging... and in my experience, a lot of people do exactly what I do, which is to package up code with debugging turned on for all VIs and then use app builder to turn off debugging for the final product. In contrast, while more of a pain in the ass for you, conditional tokens are already a more 'advanced' feature and require active effort to enable, which makes them more end-user-friendly.
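      For what it's worth, the lib_blahblah_dbg pattern maps onto text languages too. A rough Python analogue of a per-module debug symbol (the flag and function names are made up for illustration):

          import os

          # Analogue of defining a lib_foo_dbg conditional symbol for a build:
          # off by default, so nothing runs unless the user deliberately opts in.
          LIB_FOO_DBG = os.environ.get("LIB_FOO_DBG") == "1"

          def process(samples):
              if LIB_FOO_DBG:
                  print(f"process() called with {len(samples)} samples")
              return sum(samples) / len(samples)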
  15. What's the original use case for this? Unless I missed it, you start with "I want this" and move on from there. Maybe I'm in the minority, but I don't see a use case for this at all, and I definitely don't see the infinity use case. If I were in charge of your time and devoted it to a similar set of features, I'd instead prioritize:
      - a universal debugging checkbox which automatically turns an app-level conditional symbol on or off
      - per-build conditional symbols
  16. The xnode right-click menu (I forget the INI key that enables it) lets you see the generated code. I'd suggest just opening the Butterworth generated code, copying it directly onto the diagram of an inlined VI, and seeing whether that works or has the same issue.
  17. May or may not be sufficient, but since .mat files (v7.3 and later) are just HDF5, this library, http://h5labview.sourceforge.net, has an example for reading and writing the special MATLAB metadata. (A quick sketch of the same idea outside LabVIEW is below.)
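      Because a v7.3 .mat file is plain HDF5, any HDF5 reader can open one for a sanity check. A minimal Python sketch with h5py; the file and variable names are placeholders:

          import h5py

          with h5py.File("results.mat", "r") as f:   # v7.3 .mat files are HDF5
              print(list(f.keys()))                  # top-level MATLAB variable names
              data = f["data"][()]                   # read a dataset into a NumPy array
              # Note: MATLAB arrays come back transposed (column-major vs row-major).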
  18. I am using those VIs because I can't update to 2017, so I guess this is the part where I just keep my fork separate.
  19. Finding targets is relatively easy: http://zone.ni.com/reference/en-XX/help/373107E-01/nisyscfg/find/ The other part, scripting a target into a project, was officially not possible last time I checked. There are some internal functions that do it properly (I found them at one point digging through the dialog code), and of course you can edit the project file XML yourself, but nothing great.
  20. It's difficult to trust a table like that. To specifically call out one obvious issue: if someone has a copy of your repo, then no, you can't control access. But access control doesn't happen in git or hg; it happens at the server, and all of the listed servers have reasonable access controls.
  21. I've never tried to set up locking on any git or hg server, so I can't help there, but I also absolutely hate that 'feature' of svn, p4, etc., so... I find that merge conflicts are rare unless your source is very tightly coupled or monolithic (and of course, you can tell whether it's tightly coupled by how many merge conflicts you have ). But really, you are the only one who can determine whether merging will be an issue. If you find yourself very often going to your coworkers and saying 'hey, unlock that thingy please!', you will absolutely have a bad time with git. If, like me, that's only happened a few times per year, git works fine without locks.

      Edit: might be worth checking out http://www.peterlundgren.com/blog/on-gits-shortcomings/ specifically the section on access control.
  22. One of these sounds more like NI than the other
  23. I don't necessarily disbelieve you (based on the posts of yours I've seen, you do wayyy more instrument control than I do), but that's not how the help describes that switch: http://zone.ni.com/reference/en-XX/help/371361J-01/lvinstio/visa_read/ It indicates that the setting just changes how the call interacts with the execution system, rather than the logical behavior of the node.
  24. (For context, I had just said that my real issue in this case is NI's relationship management. I don't necessarily expect my issue to be fixed, but I do want to feel like someone gives a damn. I decided this wasn't a worthwhile rabbit hole to go down, so I removed it.) I guess my view on this is that the AE role should be to push against the PSEs as the customer's advocate, even as the PSEs push back and help prioritize quality issues for their R&D groups. I think you're right that AEs are taught to stay in, as you put it, this vaguely defined set of channels, but what's the point of hiring engineers to do a support job and then not empowering them to really do it? To be harsh about it... anybody can google some terms and throw KBs at people until they go away :/