Posts posted by smithd

  1. It sounds doable as long as the top-level interface doesn't change and there are no dependencies shared from the exe to the PPL, especially shared parent classes. Not exactly the same situation, but a plugin-based application worked well with a static exe and rebuilt PPLs, except that occasionally the PPL child class and exe parent class would stop agreeing and nothing would load. So long as you're basically using the exe as an HTTP client and launcher, I think you'll be fine.

  2. On 12/1/2017 at 11:43 PM, TTGrey said:

    The influxdata products look like they are powerful. I'm not sure how they would interface with labview yet. Thanks again.

    That's actually the easy part, which is why I was thinking about it as an option. The specific help doc is here (https://docs.influxdata.com/influxdb/v0.9/guides/writing_data/), but basically you'd just use the built-in HTTP(S) client library in LabVIEW with the URL and data string as shown on that page, essentially "my_interesting_measurement,<metadata fields> value=123.456". 
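    Since I can't easily paste a VI snippet here, here's the same write sketched in Python instead of the LabVIEW HTTP client VIs (the host, database name, and tag are all made up):

        import requests

        # One line of InfluxDB "line protocol": measurement name, optional tags, then fields.
        line = "my_interesting_measurement,rig=test_stand_1 value=123.456"

        # POST it to the /write endpoint of a (hypothetical) local InfluxDB instance.
        resp = requests.post("http://localhost:8086/write",
                             params={"db": "mydb"},  # the target database must already exist
                             data=line)
        resp.raise_for_status()  # InfluxDB replies 204 No Content on success

    In LabVIEW it's the same idea: open a handle with the HTTP client VIs and POST that string to the /write URL.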

     

    Also, for daq.io, it looked like they had a self-hosted option, so it wouldn't be putting your data in 'the cloud' so much as on a local server at your company.

  3. 1 hour ago, TTGrey said:

    The OPC UA option is appealing because the OPC server could also handle the modbus communication. I wasn't aware that there were third party apps that could log the data and then access it through labview APIs. I will definitely look into it. However, it seems like the licensing could still be an issue, especially if we would have to buy a OPC UA license and a DSC runtime license for each deployment.

    I'm still looking at your RTI DDS suggestion but I don't think I have enough of a grasp on it to understand if it's a viable solution.

    For DDS, my (admittedly also vague) understanding is that they use the DDS publish/subscribe model to develop various tools which consume the data. A concept diagram (i.e. marketing material) can be found on this page: https://www.rti.com/products/dds and the add-on product would be the database integration service: https://www.rti.com/products/dds/add-on-products
    Looking at it more thoroughly, I think it's probably way past what you need and geared towards a different use case.

    As for the OPC UA, you probably wouldn't need a DSC runtime license in that situation, since you're using the separate Modbus library. To make sure I'm being clear, I would see you running an embedded OPC UA server within your application, which itself could host a short-term historical log. For longer-term logging I'd imagine using a different vendor with the capability of pulling data from any OPC UA server, using OPC UA as the standard SCADA protocol backbone for your system. I don't have a specific recommendation for this, but a quick Google search for "OPC UA data historian" comes up with this as an example: https://opcfoundation.org/products/view/prosys-opc-ua-historian and that company also has an OPC UA client and this thing: https://www.prosysopc.com/products/opc-ua-modbus-server
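    To make the 'embedded server' idea a bit more concrete, here is roughly the shape of it sketched in Python with the python-opcua package, standing in for whatever OPC UA library you'd actually use from LabVIEW. The endpoint, namespace, and variable names are made up, and I'm going from memory on the historizing call, so double-check that against the docs:

        from opcua import Server

        server = Server()
        server.set_endpoint("opc.tcp://0.0.0.0:4840/my-test-rig/")       # hypothetical endpoint
        idx = server.register_namespace("http://example.com/test-rig")   # hypothetical namespace

        # Publish one value; in the real app the Modbus polling loop would own this node.
        rig = server.get_objects_node().add_object(idx, "TestRig")
        pressure = rig.add_variable(idx, "Pressure", 0.0)

        server.start()
        try:
            # Keep a short in-memory history so clients can read back recent samples.
            server.historize_node_data_change(pressure, period=None, count=1000)

            pressure.set_value(101.3)  # the Modbus loop would call this on every poll
        finally:
            server.stop()

    A long-term historian is then just another OPC UA client that reads or subscribes to that server.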
    All that having been said, if you have a very small system and licensing costs are a concern, all of the above is probably overkill. A lot of these integrated logging solutions seem to be geared towards bigger systems, and so is the pricing.

    Edit:
    These two occurred to me:
    https://www.daq.io/what-daqio-is/
    Looks like it's a sort of integrated solution for historical data using a simple web-based LabVIEW API. I can't speak to it beyond having seen a demo at some point.

    https://www.influxdata.com/time-series-platform/telegraf/
    I had thought about using this myself at one point, but my goals got redirected. As I recall, it's a time-series database with visualization tooling ("Chronograf") and a simple HTTP(S) interface for inserting data.

  4. A team at NI has created an application engine which may help do what you want: http://ni.com/dcaf
    It's a configurable plugin engine which maps a plugin (for example, there is one which polls Modbus values at a periodic rate) to scoped data storage inside the engine and then maps that data out to other plugins (for example, a TDMS writer). It can obviously get more complicated as you add custom logic, but I think they've been doing a pretty good job of making that easier as well. If it sounds helpful, the guys working on it are very accessible, so just message them or post in that group. For retrieving data from TDMS and processing it, I think most anyone at NI would recommend DIAdem, but it's not really a SCADA tool so much as a fancy Excel tool -- displaying data on the fly for an operator might be tougher.

    I'm not personally aware of anything that would help with everything. Something that may help partially is the new OPC UA module, licensed separately from DSC (I think it's something like $500 for a seat and then maybe $100 per deployment, if I remember right). I say new because the outside looks the same, but it adds alarms and a historical server built in. You'd essentially copy your Modbus variables into an OPC UA server instance, and then clients could read N samples' worth of historical data (i.e. you could maybe store the previous day in memory). Once it's in OPC UA land, I would bet you could find some other vendor with a good long-term logger.

    Along similar lines, the RTI DDS toolkit is another protocol library where the RTI folks sell add-on toolkits, like loggers, which consume the published data. So again you'd read Modbus variables, copy them into DDS, and run a third-party service to do the logging and history.

  5. I think if we've learned anything in the past few decades, it's that a dedicated attacker can do a lot. I tend to think a password is sufficient -- the people looking for passwords are either:

    • Looking because they have a legitimate reason, like they locked themselves out, or
    • Were never going to pay for it anyway

    So while I don't know your particular market or customers, I wouldn't generally lose sleep over it.

    As to the specific question, this may help:

    http://digital.ni.com/public.nsf/allkb/831F38C46BCBDADE8625793A0054BB19

    It sounds like removing block diagrams should be sufficient, in line with that KB. The only additional comment I have is that you should also remove front panels for any code which may have sensitive data on it -- for example, a license key which gets passed to a function. If someone manages to get that function to open its front panel, I believe it's possible to extract the data.

  6. Yeah, so my issue with the 3.5 character times is that the VISA Bytes at Port value isn't always accurate, especially in my case on Linux targets. To give you an example, let's say you have a single master and a single slave, and the master sends 134 bytes. Polling Bytes at Port gives you:

    • 0 ms -- 0 bytes
    • 5 -- 20
    • 10 -- 20
    • 15 -- 20
    • 20 -- 40
    • 25 -- 40
    • 30 -- 40
    • etc...

    So if 3.5 character times is (in this example) <15 ms, we'd start trying to parse after only getting 20 bytes. I eventually got hold of the Linux guys in R&D, who explained that this is fundamental to how NI-VISA and the Linux serial driver interact, and that the behavior couldn't be made more sensible.

    If you imagine this on a multi-device network, it's possible that you accidentally merge requests because you have no way of detecting the silence between messages.

    If you just use the CRC, it is, as mentioned above, quite expensive, and it's possible (if unlikely) to hit a byte string where the CRC check gives a false positive.

    Assuming you know all of the function codes (I know you don't), the right answer is to try to parse the packet, using timeouts as a guide for error handling.

    Since you don't know all the function codes, I think you have to jump through some hoops to use all three together. If the function code is known, parse it. If you read the function code and it's unknown, use the CRC method in combination with silence times. However, the silence threshold has to be pretty flexible, and you have to expect that you will miss a silent period if baud rates are high.
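    For reference, here's what the CRC check itself looks like, sketched in Python (the algorithm is the standard Modbus CRC-16; the function names are mine):

        def modbus_crc16(data: bytes) -> int:
            """Standard Modbus RTU CRC-16: init 0xFFFF, reflected polynomial 0xA001."""
            crc = 0xFFFF
            for byte in data:
                crc ^= byte
                for _ in range(8):
                    if crc & 0x0001:
                        crc = (crc >> 1) ^ 0xA001
                    else:
                        crc >>= 1
            return crc

        def frame_looks_valid(frame: bytes) -> bool:
            """True if the last two bytes match the CRC of everything before them (low byte first)."""
            if len(frame) < 4:  # address + function code + 2 CRC bytes is the practical minimum
                return False
            received = frame[-2] | (frame[-1] << 8)
            return modbus_crc16(frame[:-2]) == received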

  7. 1 hour ago, MarkCG said:

    Just curious, but why bother when there are multiple native LabVIEW implementations already? There was the "original" NI library , the LVOOP NI implementation, and I believe at least one third party implementation. 

    ^^

    3 hours ago, drjdpowell said:

    I'm working on a Modbus Server (aka Slave) implementation for Serial communication.   Has anyone experience with this?   My immediate question is about the "RTU" format.   Standard serial format uses CRLF characters to mark the end of messages, but RTU uses "3.5 characters of silence" on the serial line.   I have no idea how to detect "silence" in a reliable way.  Past Modbus in LabVIEW implementations I've looked at seem to put waits in to create silences, but don't use the silence for defining a received message.  I am worried that this is vulnerable to error.

    Yeah, I can tell you from personal experience that actually trying to use the 3.5 characters of silence is not a great idea. You do need a read timeout and polling, but you should parse the packet as you read it to determine how long it's supposed to be, and then check the CRC. As mentioned above, the serial software stack you're going through (LabVIEW -> VISA -> OS) is not conducive to waiting for silence. The packet should be parseable, and if you keep in mind that a Modbus packet is limited to a pretty tiny maximum size, there are a lot of checks you can do to eliminate issues and basically "close the connection" if a problem is encountered. The issue with that is that both master and slave need to back off appropriately, clear buffers, etc., to make sure that any communication error can be resolved without rebooting the system on either side.
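    To illustrate the 'parse as you read' part, here's a sketch in Python of working out how long a standard RTU request should be from the first few bytes (this only covers the common public function codes, and the helper name is mine):

        # Requests for these function codes are always 8 bytes:
        # address + function code + 4 data bytes + 2 CRC bytes.
        FIXED_LENGTH_REQUESTS = {0x01, 0x02, 0x03, 0x04, 0x05, 0x06}

        # Write-multiple requests (15 and 16) carry a byte count at offset 6.
        VARIABLE_LENGTH_REQUESTS = {0x0F, 0x10}

        def expected_request_length(buf: bytes):
            """Expected total frame length, or None if we can't tell yet (or the code is unknown)."""
            if len(buf) < 2:
                return None
            fc = buf[1]
            if fc in FIXED_LENGTH_REQUESTS:
                return 8
            if fc in VARIABLE_LENGTH_REQUESTS:
                if len(buf) < 7:
                    return None          # byte count hasn't arrived yet
                return 9 + buf[6]        # 7 header bytes + N data bytes + 2 CRC bytes
            return None                  # unknown code: fall back to CRC plus silence

    Once you know the expected length, you can read exactly that many bytes with a timeout, check the CRC, and flush/resync if either check fails.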

  8. On 10/13/2017 at 10:38 PM, Aristos Queue said:

    planet581g: Have you contacted NI tech support? That's exactly the kind of thing that we'd like to hear about. I'm not sure why so few people escalate those sorts of issues to us, but I've noticed over the years that if users can restart and keep working, they never report the issue.

    It seems to me there is MASSIVE pressure to avoid escalation of these sorts of issues. You say "people don't escalate these things"; I say "the AE listens to my issue and says they need an encapsulated, reproducible case with a bow on top before they'll escalate".

    On 10/14/2017 at 5:56 PM, Aristos Queue said:

    If nothing else, filing a sufficiently outraged bug report, complete with a rub-your-face-in-it reproduction case (so the bastards at Microsoft can't claim I'm making it up) is really cathartic. Substitute "NI" for "Microsoft"... I bet it can help your morale. :-) 

    That's interesting... To me, a lot of the issues I have with LabVIEW are what I'd describe as "transient showstoppers". For example, I had a similar issue where right-clicking anything would hang LabVIEW, and I needed to get to a class property window. The problem eventually went away, and it's not a crash. Because of the AE situation, there is basically no way to provide this feedback to NI in a way they will respond to. Making a reproducible case is difficult if not impossible, and where I work I can't just hand NI the whole codebase, so you end up kind of stuck. You might say "a problem like that is hard to solve for any company", and my response is "well, it sure seems to happen a lot with LabVIEW". 

    Edit: Just got round to this thread, another perfect example: https://lavag.org/topic/20325-slow-index-array-of-classes/

    "As much as I'd love to dig into the LabVIEW memory manager to truly understand what's happening in the dev environment (not), I am just going to put this in the "no longer a problem" column and move on."

    On 10/14/2017 at 7:07 PM, ShaunR said:

    Changing versions is a huge project risk. You may get your old bug fixed (not guaranteed, though) but there will be other new ones and anyone who converts mid-project is insane. In fact. I would argue that anyone who upgrades before SP1 is out is also insane.

    On the first part, changing versions: definitely agree, and this bit me hard last year. However, I'm totally on board with using non-SP1 versions. I have never waited, and I haven't seen any version where I considered the major release to be any different from the SP in terms of reliability. As an example, I would absolutely take 2015 non-SP over 2014 SP1 any day. 2014 was just a horrible year for LabVIEW, don't know why.

    On 10/17/2017 at 9:18 AM, shoneill said:

    Any correlation between the segment of the LV user base who adopts non SP1 versions and the ones targetted with the "Programming Optional" marketing?


    4 hours ago, Tim_S said:

    NI started recommending virtual machines at the last seminar I was at. With the different versions of LabVIEW and drivers it's been the only sane way to manage. My IT is balking at the idea as Microsoft sees each VM as a different PC, so one physical (not-server) PC may get hit for many Windows licenses.

    Depending on what you develop, Linux VMs are an option. You can request a Linux LabVIEW copy from your salesperson, I believe (or by just calling in to support). 

  9. Terms like "unmaintainable" (and, probably more so, the flip-side terms "maintainable", "scalable", etc.) bother me because, whatever meaning they're supposed to have, they really mean "code I don't feel like dealing with" (or, on the flip side, "this is the style of code I like"). Personally, people who use these terms turn me off to the potentially valid arguments they are making.

    Before you respond: yes, I understand this is exactly why you want a new name. 

    My main point is that they are not in any way specific, and I don't think this thread can help because you are not specific about how this particular function is more difficult to maintain, or about who you think will have sufficient difficulties with the code to call it "unmaintainable". 

    I think the who is especially important, because I think all of us have at one point or another done something completely "unmaintainable" but were happy to do it, and happy to "paintain" it, because it solved the problem and was the best we knew. It wasn't all that long ago that I was learning LabVIEW and had something like 30 Boolean control references bundled into an array for dynamic registration, and someone said "why don't you just put them in a cluster and get the children"... but I was fine with the 30 stupid controls because it solved my problem, it wasn't an important enough part of my code, and I didn't know any better.

    Besides the who, there are different vectors of difficulty with maintenance. Is it hard because you end up with a bunch of runtime errors (e.g. the standard argument against variant-payload messages)? Is it hard to teach a newbie? Does any change cascade into 37 dependencies? Are you consuming a globally shared resource with many undocumented writers? These are all valid reasons the code might be "unmaintainable", but they are far more specific.

     

    tl;dr: I vote paintainable. That's an awesome word. Already added it to spell check.

  10. 1 hour ago, Aristos Queue said:

    Surely that isn't unique to 2017... the breakpoint manager hasn't changed (to the best of my knowledge) at all in several versions.

    Are you sure that is a new issue? 

    I've seen the high CPU issue for years, and I never use the breakpoint manager. Actually, the high CPU load issue is kind of a good sign, as it usually shows up after LabVIEW has been open for a long time, which means it hasn't crashed yet.

     

    Personally I haven't seen anything in 2017 which makes me think it's less stable, but I don't use it as regularly as I use 2015. The thing that has me annoyed is that after installing the new driver set, MAX crashes on every single close (in addition to the regular MAX crashes).

  11. 3 hours ago, Aristos Queue said:

    So, I'm not knocking your concern, and it might be a reason not to do the feature. I'll include it in the analysis if I decide to propose this to the team. But I think the concern is already there, so to me, that makes it not something that should restrain improving this aspect of LabVIEW. But, again, others may disagree. :-)

    Fair enough...

    3 hours ago, Aristos Queue said:

    This is a valid concern, but I think it is already a concern. You can get a LOT of performance boost by turning off debugging today.

    Getting off topic, but at least where I'm at, LabVIEW is most often competing with Python or MATLAB, so speed isn't a major concern for them ;). I definitely see the speed gains on RT, and the real benefit seems to be memory usage these days.

  12. 29 minutes ago, Aristos Queue said:

    I frequently have code in the block diagram to assert X is true or to log information to a file while running. I don't want any of that code in the final product. Right now, it's a major pain to turn it on or off. I toggle debugging on and off all the time.

     

    23 minutes ago, Aristos Queue said:

    PS: There are also regions of the code base that I know and own ...And it kind of is... LabVIEW NXG is being designed with that option in mind, but there aren't plans to do the same for LabVIEW 20xx.

    Yeah, I'm on board, I was just throwing out what I thought would be better. I suppose you're right about per-function, but per-module is handled fairly effectively with conditional symbols by just defining lib_blahblah_dbg. I understand vi_blah_dbg would be more annoying, but then again that depends on how many specific VIs you have to debug at a time. But yeah, I understand neither of those will come to pass in current LabVIEW.

    So let me switch to the argument against your feature: the other concern I'd have more generally is that while you may turn debugging off in your builds, the difficulty of actually producing an optimized build with app builder means many clients of your code (or, if not yours, then someone else using this feature) will fail to do so, leading not just to unoptimized code, but to deliberately slower code which produces weird logging data they don't want. This structure seems like it could lead to many situations where people throw in code they expect not to be run by end users, but it runs anyway because the end user isn't familiar with how to turn off debugging on all parts of their application. You might say that as a library developer you should ship code with debugging turned off, which I'd agree with if it weren't already a pain to toggle debugging on or off... and in my experience a lot of people do exactly what I do, which is to package up code with debugging turned on for all VIs and then use app builder to turn off debugging for the final product. In contrast, while more of a pain in the ass for you, conditional tokens are already a more 'advanced' feature and require active effort to enable, which makes them more end-user-friendly.

  13. What's the original use case for this? Unless I missed it, you start with "I want this" and move on from there. Maybe I'm in the minority, but I don't see a use case for this at all. I definitely don't see an infinity use case.

    If I were in charge of your time and devoted it to a similar set of features, I'd instead prioritize:

    • Universal debugging checkbox which automatically turns on or off an app-level conditional symbol
    • Per-build conditional symbols
  14. Finding targets is relatively easy: http://zone.ni.com/reference/en-XX/help/373107E-01/nisyscfg/find/

    The other part, scripting a target into the project, is officially not possible, last time I checked. There are some internal functions that do it properly (I found them at one point digging through the dialog code), and of course you can edit the project file XML yourself, but nothing great. 

  15. I've never tried to lock any Git or Hg server, so I can't help there, but I also absolutely hate that 'feature' of SVN, P4, etc., so...

    I find that merge conflicts are rare unless your source is very tightly coupled or monolithic, and of course you can tell if it's tightly coupled by how many merge conflicts you have ;)

    But really, you are the only one who can determine if the merging will be an issue. If you find yourself very often going to your coworkers and saying 'hey, unlock that thingy please!', you will absolutely have a bad time with Git. If, like me, that's only happened a few times per year, Git works fine without locks.

    Edit: it might be worth checking out http://www.peterlundgren.com/blog/on-gits-shortcomings/ -- specifically the section on access control.

  16. 24 minutes ago, rolfk said:

    To me it feels like a nice idea that got executed based on a limited test but not quite tested on real world professional sized projects. Or every developer at NI uses only the highest end super powered machines with CPUs that normal money can't buy you :D

    One of these sounds more like NI than the other ;)

  17. 7 hours ago, ShaunR said:

    Delete the "Bytes at Port" and rewire. Right click on the "Serial Read" and select "Synchronous I/O Mode>Synchronous" (the little clock in the corner of the Read icon will disappear). It will then block until the termination character is received and return all the bytes or timeout. This is the simplest way of getting up and running and you can look at asynchronous reading later if required.

    I don't necessarily disbelieve you (based on the posts I've seen of yours, you do wayyy more instrument control than I do), but that's not how the help describes that switch: http://zone.ni.com/reference/en-XX/help/371361J-01/lvinstio/visa_read/ -- it indicates that this just changes how the execution system handles the call rather than the logical behavior of the node.

    Quote

    Whether the data is read synchronously or asynchronously is platform-dependent. Right-click the node and select Synchronous I/O Mode»Synchronous from the shortcut menu to read data synchronously.

    When you transfer data from or to a hardware driver synchronously, the calling thread is locked for the duration of the data transfer. Depending on the speed of the transfer, this can hinder other processes that require the calling thread. However, if an application requires that the data transfer as quickly as possible, performing the operation synchronously dedicates the calling thread exclusively to this operation.

    Note: In most applications, synchronous calls are slightly faster when you are communicating with 4 or fewer instruments. Asynchronous operations result in a significantly faster application when you are communicating with 5 or more instruments. The LabVIEW default is asynchronous I/O.

     

  18. (For context: I just said that my real issue in this case is NI's relationship management. I don't necessarily expect my issue to be fixed, but I do want to feel like someone gives a damn. But I decided this wasn't a worthwhile rabbit hole to go down, so I removed it.)

    4 hours ago, rolfk said:

    but if the problem gets difficult you can run into a brick wall sometimes and the people in support are not allowed to leave the predefined channels even if they don't work.

    I guess my view on this is that the AE role should be to push against the PSEs as the customer advocate, even as the PSEs push back and help prioritize quality issues for their R&D groups. I think you're right that AEs are taught to stay in, as you put it, this vaguely defined set of channels, but what's the point of hiring engineers to do a support job and then not empowering them to really do the job? To be harsh about it... anybody can Google some terms and throw KBs at people until they go away :/
