
Stobber

Members

Posts posted by Stobber

  1. 250 ms is pushing it, as that is the default Nagle delay. That shouldn't make it back up messages, though; it just means your heartbeat will time out sometimes.

    That's definitely happening, and it's the first-tier cause of my headache. I need to get heartbeating working consistently again.

     

     

    If you have hacked the library to use BUFFERED instead of STANDARD, then it is probable that you are just not consuming the bytes, because a read no longer guarantees that they are removed. That will cause the Windows buffer to fill and eventually become unresponsive until you remove some bytes.

    Huh...that's good to know. Is there a way to check the buffer without popping from it? I could add a check-after-read while debugging to make sure I'm getting everything I think I'm getting.
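At the socket level, "check the buffer without popping from it" is the MSG_PEEK flag. LabVIEW's TCP Read doesn't expose this directly, but a minimal Python sketch illustrates the underlying mechanism (the function name is mine):

```python
import socket

def peek_then_read(sock, n):
    peeked = sock.recv(n, socket.MSG_PEEK)  # look without consuming
    data = sock.recv(n)                     # now actually consume
    assert data.startswith(peeked)          # the peeked bytes were still there
    return data
```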

  2. As a guess, I would say you probably have a short timeout and the library uses STANDARD mode when reading.

    Do you get this behaviour when the read timeout is large, say, 25 seconds?

    Actually, I'm using a patched version of STM where I fixed that issue by changing all TCP reads to BUFFERED mode. :) So that's not a problem.

     

    I do successfully read in a simple test VI with a large timeout, but debugging in my application which requires a heartbeat in each direction every 250 ms makes the whole thing lag and choke. I'm going to try and write a more complex test VI that does the heartbeating without any other app logic involved.

    I have not messed about with this kind of thing in years, but perhaps it's Nagle's algorithm at play? You can disable it using Win32 calls, see here.

    Let me look into that, too. Thanks!
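For reference, disabling Nagle's algorithm comes down to setting the TCP_NODELAY socket option. In LabVIEW that means a Win32 setsockopt call on the raw socket handle; the option itself, shown in Python as a sketch:

```python
import socket

# Create a TCP socket and turn off Nagle's algorithm on it.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
sock.close()
```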

  3. Thank you all for the help. It turns out that some of the TCP Read calls inside STM.lvlib's VIs were causing error 56 or returning junk bytes when I set the timeout too low (e.g. 0 ms, in an attempt to create a fast poller that was throttled by a different timeout elsewhere in the loop). When I increased the timeouts on all TCP Read functions, my problems went away. Well, that's how it looks right now, anyway.

     

    Incidentally, setting a timeout of 0 ms works fine if I'm asking client and server to talk over the same network interface on the same PC. That kind of makes sense.

     

    Update: I'm now having serious problems with backlogged messages. I get messages out of the TCP Read buffer in bursts, and it seems the backlog grows constantly while running my app. This is breaking the heartbeat I'm supposed to send over the link, so each application thinks the other has gone dead after a second or two. Anybody know what might cause the TCP connection to lag so badly?
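One common cause of that symptom is reading only one message per loop iteration, so the OS receive buffer fills faster than it drains. A sketch of draining everything already buffered on each cycle with a nonblocking read (function name and chunk size are illustrative):

```python
import socket

def drain(sock, chunk=4096):
    sock.setblocking(False)
    data = bytearray()
    while True:
        try:
            part = sock.recv(chunk)
        except BlockingIOError:
            break              # nothing left in the receive buffer
        if not part:
            break              # peer closed the connection
        data.extend(part)
    return bytes(data)
```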

  4. Network streams?

    I used them for two years, and they repeatedly failed to deliver on their promise. Lots of errors thrown by the API that the application had to be protected from (including error codes documented only for Shared Variables?!), lots of issues with reconnection from the same app instance after an unexpected disconnection, lots of issues with namespacing endpoints, element buffering, etc. I never kept detailed notes on all of it to share with others, but the decision to use raw TCP was actually a decision to get the hell off Network Streams.

     

    A listener has to be running for a client to connect via TCP/IP; otherwise you should get error 63 (connection refused). That's just the way TCP/IP works. It is up to the client to retry [open] connection attempts in the event of a connect failure, but the general assumption is that there is an existing listener (service) on the target port of the server that the client is attempting to connect to.

     

    N.B.

    Regardless of the purpose of the data to be sent or received: in TCP/IP, Client = Open and Server = Listen.

     

    Right, thanks. Glad to know that observation makes sense. Now to debug the part where a connection is closed on the VxWorks target between "Listen" and "Read" for no apparent reason...
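The client-side retry described above looks roughly like this, transposed to Python for illustration (attempt count and delay are invented; the LabVIEW equivalent wraps "TCP Open Connection" in a retry loop keyed on error 63):

```python
import socket
import time

def connect_with_retry(host, port, attempts=5, delay=0.2):
    for _ in range(attempts):
        try:
            return socket.create_connection((host, port), timeout=2.0)
        except ConnectionRefusedError:
            time.sleep(delay)  # listener not up yet; back off and retry
    raise ConnectionRefusedError(f"no listener on {host}:{port}")
```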

  5. Why are you removing diagrams from the VIs? If you don't remove the diagrams, then you can use the externals/submodules features quite happily in all [forward] LabVIEW versions. The issue with LabVIEW is re-linking: if a VI changes or is a different version, then LabVIEW usually wants to recompile the entire hierarchy, including the VIs you pulled in from other projects. Separating compiled code from the diagrams has eased, but not eliminated, that problem.

     

    I'm not removing diagrams. If I take a VI saved with LV 2014 SP1 and try to open it with LV 2013 SP1, I'm greeted with an error dialog that says I can't open it. So if modifications to my reuse library are made and saved in the latest version of LV, any existing project that wants to import those mods will have to be upgraded to the latest version. This can be catastrophic for some of my long-lived projects that are still running on 2012SP1 or even older.

  6. A sage LV developer recommended that I stop building my internal LV reuse libraries into VI packages and just link to them from my project repos as git submodules. Sounds interesting, especially given how much work I put into creating, testing, distributing, and documenting VIP and VIPC files. I poked around with svn externals a little bit years ago and thought they were neat. So I went snooping around the blogosphere for a while to see what git submodules are all about.

     

    Submodules in git are pointers to a specific commit in a repository at a specific URL. This is evidently different from externals in svn, but both are intended to solve the same set of issues.

     

    http://somethingsinistral.net/blog/git-submodules-are-probably-not-the-answer

    [T]here are a number of use cases for [submodules], and they all center around nested git modules that are much more static in nature and use.

    One - you have a component or subproject where the project is undergoing extremely rapid change or is unstable, so you actually want to be locked to a specific commit for your own safety. Or, perhaps a project is breaking backwards compatibility of their API and you don’t want to have to deal with that till they stabilize their code. In this case, git submodules, being reasonably static, are protecting you because of that static nature. You don’t have to clone the outside component and switch to a specific branch or go to any of that hassle - things just work.

    Two - you have a subproject or component that you’re either vendoring or isn’t being updated too often, and you just want an easy copy on hand. To provide an example, in my dot files, if there’s a vim plugin I want I can just add it as a git submodule, and it’s done. I don’t care about the history. I don’t need to be at the latest version. I don’t plan on doing a lot of work on that code myself. Since this entire workflow is static, things work fine.

    Three - There’s a part of your repository that you’re delegating to another party. Let’s say you’re paying someone to write a plugin for a project you’re using, and you need to develop on the main codebase. In this case, the plugin repository is chiefly developed by the plugin developers, so they own the repo and periodically they’ll tell you when to update submodule commits. Submodules are great for dividing responsibilities like this, assuming that there’s not frequent updating.

     

    Right on! #2 seems right up my alley.

     

    However, everyone in the git universe abandoned them over the last couple of years because of several issues with brittleness and complex git workflows.

     

    https://github.com/cristibalan/braid

    Vendoring allows you take the source code of an external library and ensure it's version controlled along with the main project. This is in contrast to including a reference to a packaged version of an external library that is available in a binary artifact repository such as Maven Central, RubyGems or NPM [ed: or VI Package Network].

    Vendoring is useful when you need to patch or customize the external libraries or the external library is expected to co-evolve with the main project. The developer can make changes to the main project and patch the library in a single commit.

    The problem arises when the external library makes changes that you want to integrate into your local vendored version or the developer makes changes to the local version that they want integrated into the external library.

     

    Still, I can't help but wonder if these complaints would be minimized in my use case. Then a really important, LabVIEW-specific issue hit me:

     

    LV code is compiled to its version, and it can't be opened by an older version of the LV dev environment. I wouldn't be able to push changes made in a local submodule back to the remote library repo unless my code were back-saved to the remote's version of LV. (For example, I made a pull request against NI's STM repository recently but had to leave my code in a later version than theirs.)

     

    How the heck do we use tools like submodules/externals and the fork-pull workflow without being forced to upgrade all our LV code to the latest version (and re-test it, often against RT or FPGA hardware) every year when NI releases a new runtime? That's a dealbreaker for me because of the risk involved in upgrading existing projects to new drivers, runtimes, etc. (I've seen bugs introduced with a new version of LV that broke previously working code.)

     

  7. I have a small LV code library that wraps the STM and VISA/TCP APIs to make a "network actor". My library dynamically launches a reentrant VI using the ACBR node that establishes a TCP connection as client or server, then uses the connection to poll for inbound messages and send outbound ones.

     

    When I try to establish a connection using my Windows 7 PC as the client and my sbRIO (VxWorks) as the server, it connects and pushes 100 messages from the client if the server was listening first. If the client spins up first, sometimes it works and sometimes I get error 56 from the "TCP Open Connection" primitive repeatedly. Other times I've seen error 66.

     

    When I try to make Windows act as the server, the sbRIO client connects without error, but it returns error 56 from "TCP Read" on the first call thereafter (inside "STM.lvlib:Read Meta Data (TCP Clst).vi"). This happens in whichever order I run them.

     

    This test, and all others I've written for the API, works just fine when both client and server are on the local machine.

     

    -------------------------------

     

    I'm tearing my hair out trying to get a reliable connection-establishment routine going. What don't I know? What am I missing? Is there a "standard" way to code the server and client so they'll establish a connection reliably between an RT and a non-RT peer?

     

  8. He who manually merges conflicted .lvclass, .lvlib, or .lvproj files - successfully! - shall be my overlord.

    How many times have I considered changing the MIME type of those beasts to binary, just to avoid the pain that inevitably comes with a (what svn thinks was a) successful, automatic merge of them.

     

    I do mark them binary in my repos. You can't even manually merge most changes because there are interrelated tags and mystery hashes designed into those file types. They aren't "pure" XML.
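For reference, marking those file types binary in git can be done with a .gitattributes entry; a minimal sketch (patterns only, adjust to your repo layout):

```
# .gitattributes — treat LabVIEW's XML-ish project files as binary so
# git never attempts an automatic (and silently wrong) text merge on them
*.lvclass binary
*.lvlib binary
*.lvproj binary
```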

  9. I use Git, and I'm sticking with it. The ease of branching is what made me switch from SVN. Making a branch, testing a change, and merging it back in is fast and easy.

     

    If you're the only developer, sure. But merging changes to any of the impure XML files NI uses (.lvclass, .lvlib, .lvproj) or to the same VI is basically impossible. I've always ended up just stomping on one developer's changes instead of trying to merge them.

  10. I'm building a 64-bit binary using LVx64. My build tool uses the ZLIB package. I could change it to use the command line, but that complicates distribution of the tool.

     

    A counter to your post, since we're questioning one another's motivations (or possibly justifications for making a request?): What use is this information anyway? Am I missing something obvious?

  11. Sorry for not replying to update everyone. I have some quick and dirty code working to create junctions (a type of folder-based symlink in Windows NTFS) from specific config and data paths to my Dropbox folders. I'm not looking to share anything related to build or things that should be in a project repository; just general config and customizations. I started work on the code to detect and remove the junctions so the user can "unlink" a computer/VM, but it involves UAC elevations and a bunch of calls into the file system to figure out whether a given path is a reparse point or not. I'll post the proof of concept here when I have it working (or the final state of it if I don't finish before Christmas, so someone else can pick it up and keep poking at it).

  12. Has anyone put effort into defining a roaming environment configuration for LV? I work on several virtual machines, and keeping my LV settings, probes, glyphs, QD extensions, labview.ini tokens, etc. consistent across all of them is a nuisance. I'd like to put all that stuff in Dropbox so it syncs automatically. Are there known drawbacks or existing tools I should know about before I funnel several hours of tinkering into it?

  13. I had never noticed that JSON allows, but does not require, / to be escaped as \/.   Googling suggests this is due to some use of JSON embedded in other formats such as HTML that do use / as a control character.  I believe our library as it stands should accept both versions.   But what should it do on output?

    Accepting both and outputting the spec-conformant version seems like the right thing to do, especially if that version is smaller and can be modified by the application to add escaping for that character as needed.
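For what it's worth, that's how common JSON libraries already behave; Python's json module, for example, accepts the escaped solidus on input and emits the bare / on output:

```python
import json

# "\/" is accepted on parse...
assert json.loads('"a\\/b"') == "a/b"
# ...but serialization emits the unescaped, shorter form
assert json.dumps("a/b") == '"a/b"'
```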

  14. So I would recommend simply encapsulating the communications to your FPGA target and things work better.

     

    Huh? How? By wrapping them in subVIs? That still statically links to the NI-RIO xnodes. I need a way to dynamically load the code that calls NI-RIO at runtime.

     

    I ended up turning my IO API library into a class and abstracting it from the application with the Factory Pattern, but it took a lot of dumb boilerplate code, and now there's quite a bit of code duplicated between the Physical IO class and the Simulated IO class. I still hate it, but it works for now.
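Transposed to text for illustration, the shape of that factory abstraction looks roughly like this (class and method names are invented, not the actual library's):

```python
class IOBase:
    """Interface the application codes against."""
    def read_channel(self, name):
        raise NotImplementedError

class PhysicalIO(IOBase):
    def read_channel(self, name):
        # would call into NI-RIO on the real target
        return 0.0

class SimulatedIO(IOBase):
    def read_channel(self, name):
        # canned value; the Windows build never links the FPGA code
        return 42.0

def make_io(on_target):
    # the caller sees only IOBase; the concrete class is chosen
    # (and, in LabVIEW, loaded) at runtime
    return PhysicalIO() if on_target else SimulatedIO()
```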

  15. Quite painful if you try to build a plugin interface.

     

    And it's not really unintuitive. It was the only way to open a VI Reference for use with the Call By Reference node, before the static VI refnum was introduced.

     

    Lots of things about building plugin interfaces in LabVIEW are quite painful. :lol:

     

    So it's intuitive only to someone who started using LV before the static VI refnum was introduced...what percentage of total users do y'all constitute? As a LV 7.1+ power user, I had to post on LAVA to figure it out. I'd say that presents a case for reconsidering the UX on this feature.

  16. I have an RT app. A pretty big one. It does a lot of stuff.

     

    I have a Windows app that talks to my RT app. In the interest of dividing labor, creating automatic unit/subsystem tests, allowing multiple devs to build on their own machines, etc. I need to be able to simulate the RT program on Windows.

     

    So I abstracted all the calls to NI-RIO using Conditional Disable Structures and added a "Simulator" VI that wraps the entire application while hooking into some custom messages so it can display information to the user. When I need to run on Windows, the RIO calls don't execute and the Simulator panel ties into the network connections. When I need to run on RT, the Simulator panel isn't called and the RIO calls execute. It works great... except on machines that don't have the LVFPGA module installed (because they only develop the Windows app). Even though the ConDis structures prevent code from being called, it still gets statically linked into the app. This prevents a Windows-only system from building the Windows app: it throws errors while looking for FPGA Module components.

     

    What's the best way to abstract the FPGA reference wire from the code to work around this? I hate the idea of creating a class hierarchy for my IO calls and dynamically loading the class at runtime because dynamically launched code is nearly impossible to debug on LVRT.

  17. I want to use the ACBR nodes to launch and unload a VI, but they need a strict reference and I only know how to create one of those using a Static VI Reference. Statically linking to these VIs causes some build issues on other people's machines, though. Do I have to go oldschool and use the "Run" method with a non-static reference, or is there a way to get a strict reference without statically linking to the VI that I want to launch?
