Everything posted by JKSH

  1. I suggest spending most of your energy on choosing a VCS for your team, as this is the part that takes the most effort for your team to adopt. I'd say that the other aspects (project management/issue tracking, documentation) have a lower barrier to entry.

     Can you elaborate on which parts of the centralized model are most important to you? I'm guessing it's because a distributed VCS (DVCS) can do just about everything that a centralized VCS can, but the converse is not true (hence my previous question). As a result, the online community (which is far more visible than corporations) is moving to DVCSes. The 2 reasons I can think of for a company to stick with a centralized VCS instead of switching to a DVCS are:

       • Inertia. If the developers are already familiar with an existing tool, there is a cost to switch.
       • They want an extremely high level of control and security over the code base. A centralized VCS makes it a bit harder for a rogue employee to make off with the whole commit history (but it doesn't stop them from taking the current snapshot).

     If the separate applications interface with each other well, how important is it to still have a single application/platform?

     Does your team have any existing interactions with the software engineering group? Can you get any support from them? Do you anticipate your team working with them in the future? If so, then the best choice for your team might be whatever the software engineering group is already using. That provides a lower barrier for collaboration between both groups. If you expect to be completely isolated from the software engineering group, then I'm guessing there is not much difference between the possible solutions you have listed. All of them will come with an initial learning curve; the important thing is to pick one, get everyone on board, and get familiarized together. I believe all modern hosting platforms support this.
     Be aware that none of the common VCSes were designed to work with something like LabVIEW; they were all designed to work with text-based code. So, regardless of which VCS you choose, LabVIEW devs must learn to take a bit more care to avoid triggering conflicts (and learn to handle the conflicts once they occur).

     How "big" are your team's projects? How often do you produce a new release? Are there parts of your release process where you go, "Man, this part is tedious and error-prone... It would be great to automate this"?

       • CI is most useful when you have a lot of people working on the same code base and/or you have teammates who churn out commits at lightning speed. It can still be beneficial for small teams, but the impact is less pronounced (and the cost-to-benefit ratio is higher).
       • CD is most useful when you want to release often, and/or your release process is tedious.
       • DevOps is most useful for a large organization that wants better collaboration between its developers and its operators, and wants to make deployment more efficient.

     As you described yourself as a "small team with a badly overdue need for SCC", I suspect these are lower priority for you right now. Again, getting SCC in place first will probably be the most helpful; the automations can always be added after you've tamed the chaos.
  2. Spoken like a true LabVIEW dev 😁 That's a really good idea
  3. JKSH

    NI PCIe-5140s

    Google isn't revealing anything for me. Who is the manufacturer of this card? (Normally, the NI website hosts documentation of NI products -- even deprecated ones)
  4. That is expected. As I wrote previously, Git and Hg are very similar to each other in scope/functionality (but not in workflow details!). 5 years ago, people were saying that we should just pick one or the other and stick with it; we gained nothing from using both. Today, there is a benefit to learning Git: It gives us easier access to the plethora of code bases around the world, and it helps us move forward from incidents like Bitbucket's bombshell.

     You have just described Git (and SVN, according to @shoneill). The exact steps differ but the concepts are the same.

     Agreed. I realize now that my analogy with an unsaved VI was a poor one. Unlike a power cut, which is quite plausible, it is actually quite difficult to accidentally lose commits unless we ignore prompts/hints/warnings like the ones @LogMAN posted. Yes, Git could be made safer by automatically preserving "detached" branches and requiring the user to manually discard them, rather than automatically hiding them when the user moves away. I guess I've never encountered this issue in my 9 years of regular Git use because I habitually create a branch before making any new commits at an old point. This highlights the importance of running UX tests on people who aren't familiar with a product!
  5. I switched from Subversion to Git many years ago and encountered quite a steep learning curve, but it was well worth it in the end -- not having to be connected to the server all the time was a great boon. I haven't used Mercurial much, but from what I've read, Hg and Git were supposed to be similar to each other (at least when compared to SVN or Perforce).

     Yes, your choice of client has a huge impact on your experience. I find GitHub Desktop far too limiting; I like the power of SourceTree but I wouldn't recommend it to newcomers of Git -- too much power and too many options can overwhelm. Having said that, SourceTree supports Mercurial too. Perhaps @drjdpowell can use SourceTree to create and manage a Mercurial project, and then repeat the same steps for a Git project? This might help you to see the parallels between the 2 technologies and learn Git faster.

     Every single commit in the Git history can be checked out. If you ask Git to check out Branch X, your HEAD now points to the latest commit on Branch X. If you ask Git to check out Commit Y, your HEAD now points to Commit Y and is considered "detached" (because it is not attached to a specific branch). To avoid "detached HEAD state", all you have to do is specify a branch when you check out.

     I have a use-case for entering detached HEAD state: Suppose I've made many commits recently and then discover a bug in my code. I want to go back to an earlier snapshot of my code, so I check out a commit that I think is good. Voila, I'm now in detached HEAD state and I can run my old code for debugging. When my HEAD is detached, I think of it as working on an anonymous/unnamed branch (a bit like how I can edit and run an unsaved VI, but if my PC loses power the VI is gone).

     Don't let the terminology discourage you; your journey will be worth it. Happy coding!
  6. I believe DAQmx and XNET have different timing mechanisms. This thread might contain useful clues: https://forums.ni.com/t5/Automotive-and-Embedded-Networks/XNET-Timestamp-and-Windows-Timestamp-Synchronization/td-p/3367619?profile.language=en
  7. The different letters mean nobody could confuse it with the TechCrunch logo either:
  8. Good find. I completely forgot that I posted that! 😅
  9. Even before this became available, the Hidden Gems palette would expose Split String.vi and Join Strings.vi, which come bundled with LabVIEW (vi.lib\AdvancedString\) but which are not shown in the palette out-of-the-box. I'm not sure why NI created new VIs instead of exposing the Hidden Gems ones. I liked the Hidden Gems versions better as they take less space on the block diagram. Note also that the out-of-the-box version has fewer features:

       • The Hidden Gems version allows you to Ignore Case
       • The OpenG version allows you to Ignore Case AND Ignore Duplicate Delimiters
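     For readers more comfortable with text languages, here is a rough Python analogue (not LabVIEW code, and not how the VIs are implemented) of what the "Ignore Case" and "Ignore Duplicate Delimiters" options do; the function name and signature are my own invention for illustration:

```python
import re

def split_string(s, delimiter, ignore_case=False, ignore_duplicates=False):
    """Split s on delimiter, roughly mimicking the optional features above:
    case-insensitive delimiter matching, and collapsing runs of delimiters."""
    pattern = re.escape(delimiter)
    if ignore_duplicates:
        pattern = "(?:" + pattern + ")+"   # treat consecutive delimiters as one
    flags = re.IGNORECASE if ignore_case else 0
    return re.split(pattern, s, flags=flags)

print(split_string("axbXc", "x", ignore_case=True))         # ['a', 'b', 'c']
print(split_string("a,,b,c", ",", ignore_duplicates=True))  # ['a', 'b', 'c']
```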
  10. There are 2 separate sets of limits: Scale.Maximum/Scale.Minimum and Data Entry Limits.Maximum/Data Entry Limits.Minimum. The digital display simply shows the value stored in the Slide -- in other words, it shows what you'd see from the Slide's terminal, local variable, or Value property node. The underlying issue is that the Slide's value remains unchanged when you update the Scale limits. The Scale limits set the visible range on the GUI but they don't set the range of allowable values. To get the behaviour you want, you don't need to use a property node on the digital display, but you must:

       • Set "Respond to value outside limits" to "Coerce" instead of "Ignore"
       • Programmatically update the Data Entry Limits
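     As a minimal (non-LabVIEW) sketch, the "Coerce" response described above simply clamps any out-of-range value into the Data Entry Limits range:

```python
def coerce(value, lower, upper):
    """Mimic 'Respond to value outside limits: Coerce' -- clamp the
    value into the [lower, upper] Data Entry Limits range."""
    return min(max(value, lower), upper)

print(coerce(12.0, 0.0, 10.0))  # 10.0 -- value above the range is pulled down
print(coerce(-3.0, 0.0, 10.0))  # 0.0  -- value below the range is pulled up
print(coerce(5.0, 0.0, 10.0))   # 5.0  -- in-range values pass through
```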
  11. Important: Make sure you sign up for a 4G service that does not use Carrier-Grade NAT (CG-NAT). If your cRIO is behind CG-NAT, then knowing its public IP won't help you.

       • If your service gives you a unique public address, then the public IP address points directly to your modem. In this case, you're good to go with hooovah's method.
       • If your service is under CG-NAT, then the public IP address points to your carrier's modem, which is outside your control. In this case, hooovah's method won't work.

     Dynamic IP addresses are a fact of life now unless you're willing to pay up, or unless you obtained a static address many years ago and have never cancelled the service since then. (Hopefully, IPv6 will solve the problem -- but it's not supported everywhere yet.) I haven't used any of these before, so I'll leave this to more experienced people.
  12. There are multiple considerations:

       • Public IP address: Your mobile carrier (or Internet service provider) assigns you a public IP address.
       • STATIC public IP address: Be aware that this is an increasingly rare commodity. I don't know which country you live in, but I'd be very surprised if your consumer mobile carrier provides static public IP addresses anymore. You might find a commercial/enterprise provider that still sells static IP addresses, or you can use a Dynamic DNS (DDNS) service like https://www.noip.com/ -- DDNS allows you to connect to an address like neilpate.ddns.net which stays static even if your IP address is dynamic.
       • Unique public IP address PER DEVICE: Unfortunately, if you have 1 SIM card, you will get 1 public IP address to be shared between your Windows PC and all of your cRIOs. This is the same as your home Internet: All the PCs, laptops, tablets, phones, and other smart devices that connect to your home Wi-Fi share a single public IP address. This is Network Address Translation (NAT) in action. If you really want multiple unique public addresses, you'll need multiple SIM cards.
       • Unique public IP address per SIM card???: Nowadays, you also need to double-check if your carrier even provides you with a unique public IP address at all! Carriers around the world have started implementing Carrier-Grade NAT (CG-NAT) for both mobile and home Internet users. This means your SIM card might share a public IP address with many other SIM cards. If this is the case, then DDNS won't work!

     Suppose you have 1 public IP address, and each of your devices hosts a web service at port 443. You can assign a unique port per device on your modem and do port forwarding as you mentioned:

       Dev PC --> neilpate.ddns.net:54430 (modem) --> (Windows PC)
       Dev PC --> neilpate.ddns.net:54431 (modem) --> (cRIO 1)
       Dev PC --> neilpate.ddns.net:54432 (modem) --> (cRIO 2)

     This means the client program on the Dev PC needs to know to use a non-standard port.
     You can do this easily in a web browser or a terminal emulator, but I'm not sure that LabVIEW can use a custom port to connect to/deploy to a cRIO.

     Alternative solutions

     You don't necessarily need a public IP address for remote access. Some modems can be configured to automatically connect to a Virtual Private Network (VPN). If you enable VPN access to your office and you ask your modem to connect to that VPN, your devices will be on the same (local) subnet as the Dev PC in your office -- we have done this for a cRIO that's deployed in the middle of a desert. If your modem doesn't support this, you could configure each device to individually connect to the VPN instead. Or, your provider might offer enterprise-level solutions that connect multiple sites to the same VPN. For example, they could offer SIM cards that provide a direct connection to your corporate VPN without the need to configure your modem or devices.

     Yes, these problems are commonly solved. The issue is that there are so many possible solutions, so you need to figure out which one works best for your use-case.
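     The port-forwarding scheme above can be modelled as a simple lookup table in the modem. This is a conceptual (non-LabVIEW) sketch only; the internal 192.168.1.x addresses are hypothetical placeholders:

```python
# Hypothetical model of the modem's port-forwarding table: one public
# address, a unique external port per internal device (internal IPs assumed).
FORWARDING_TABLE = {
    54430: ("192.168.1.10", 443),  # Windows PC
    54431: ("192.168.1.20", 443),  # cRIO 1
    54432: ("192.168.1.30", 443),  # cRIO 2
}

def route(external_port):
    """Return the (internal_ip, internal_port) that a connection to
    neilpate.ddns.net:<external_port> would be forwarded to."""
    try:
        return FORWARDING_TABLE[external_port]
    except KeyError:
        raise ValueError(f"no forwarding rule for port {external_port}")

print(route(54431))  # ('192.168.1.20', 443)
```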
  13. My gut feeling says that a Mass Compile could make this problem go away.
  14. There's an Idea Exchange entry about this for LabVIEW CG, but it really should extend to NXG too: https://forums.ni.com/t5/NI-Package-Management-Idea/Install-the-same-package-to-multiple-versions-of-LabVIEW/idi-p/3965419
  15. If I'm not mistaken, this is a gray area because no court or judge has ever contemplated this question before. The general broad understanding is "No, it's not a strict requirement, but there are reasons to do so": https://softwareengineering.stackexchange.com/questions/125836/do-you-have-to-include-a-license-notice-with-every-source-file

     That's OK. It's a bit like the Ur-Quan Masters project -- the code is open-source, but not everyone can play it with the non-open-source 3DO assets unless they already own a copy: https://wiki.uqm.stack.nl/The_Ur-Quan_Masters_Technical_FAQ#How_do_I_use_the_3DO_intro_and_victory_movies_in_the_game.3F Anyway, by making your part open-source, you already make it much easier for others to achieve the object detection stuff!

     Here's an even shorter and blunter license: http://www.wtfpl.net/about/ (although you might be less likely to receive a pint when someone becomes rich from your work)

     Note: "Public domain" has a specific meaning in copyright law, and it doesn't just mean "viewable by the public". If a work is said to be in the "public domain", that means either copyright has expired, or its authors have formally renounced their claim to copyright. As @jacobson said, a piece of code can be publicly viewable but the viewers might not have permission to incorporate the code into their own work. If you want to disclaim copyright (as opposed to using a license that says "I own this code, but you can do whatever you want with it"), see https://unlicense.org/

     You can do it all in LabVIEW itself:
  16. That's what I meant by "write a bit more code". It's not a showstopper though, especially since we can put that in a VIM. Thanks for the video link.
  17. It makes me relieved that my fears were unfounded. In the beginning, I was under the impression that LV 2019 maps were like C++ maps as @smithd described, where the value type is chosen by the programmer and fixed at edit time, and no variant conversion was involved. All was fine and well.

     However, when I read AQ's comment ("Variant attributes and maps use the same — identical — underlying data structure.... the conversion time to/from variant for the value tends to dominate for any real application"), I misunderstood him, and uncertainty crept into my mind. I thought, "Hang on... could it be that LV maps are simply a nice wrapper around the old variant storage structure? That same structure that always stores data as variants? If so, that means maps require variant conversion, which makes them less awesome than I originally thought!"

     The subsequent replies showed that I had nothing to worry about. Also, if I had thought it through more carefully, it would've been obvious that the LV 2019 map can't possibly be a simple wrapper around variant attributes, because the old structure doesn't support non-string keys.

     TL;DR: I misunderstood AQ and didn't think clearly, so I got worried that LV maps had a flaw. The worry was unfounded. Maps remain awesome.

     Anyway, even if maps did require variant conversions, that wouldn't make maps any worse than variant attributes. The map API is a lot cleaner^ than the variant attribute API, so maps would've still been the better choice. Since maps don't require variant conversions, that makes them far more awesome than variant attributes.

     ^One exception: I have to write a bit more code to get the list of map keys, compared to Get Variant Attribute with an unwired "name" input
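     As a rough text-language analogy (Python dicts standing in for LabVIEW maps, with hypothetical example data), the two key points are that values are stored directly with no conversion to an intermediate "variant" type, and that keys need not be strings, which the old variant-attribute structure could not do:

```python
# A map keyed by a non-string type (here a tuple), storing numeric values
# directly -- no serialization step, unlike string-keyed variant attributes.
channel_limits = {
    ("slot1", 0): 5.0,
    ("slot1", 1): 10.0,
    ("slot2", 0): 2.5,
}

# Lookup is direct; no conversion to/from an intermediate representation.
print(channel_limits[("slot1", 1)])  # 10.0

# Like LV maps (which keep keys ordered), we can enumerate keys in order:
print(sorted(channel_limits))
```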
  18. My apologies. I just wanted to make 101% sure that "Variant attributes and maps use the same — identical — underlying data structure" does not mean "maps store data as variants just like variant attributes". I'm now 101% sure; thanks for replying.
  19. I think that's because @Neil Pate was doing the Right Thing™ by enabling "Separate Compiled Code from Source". Unseparated VIs will ask to be re-saved if opened in a newer version.
  20. Does that mean a LabVIEW map converts the data to/from variants behind the scenes, even though the datatype is fixed at edit-time?
  21. Thanks, @Aristos Queue! I'll be tuning in. Quick note about branding: The event title is currently "Intro to G Interfaces in LabView 2020" (I peeked at the event on Microsoft Teams)
  22. One feature I miss dearly in NXG is the ability to create type definitions inside classes. Want a typedef'ed enum inside a class's namespace? LabVIEW CG says "No problem", LabVIEW NXG says "No can do". For example, I could previously have multiple enums called "State" in my project because each copy is in a different class/library: ClassA.lvclass:State.ctl and ClassB.lvclass:State.ctl. However, NXG forces globally unique names for enums/clusters. * I last checked in NXG 3.0 -- perhaps the ability will exist in NXG 5.0? I think this trend started a few years ago: https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Restore-High-Contrast-Icons/idi-p/3363355
  23. It's a common problem in LabVIEW. Here's one potential "fix": https://forums.ni.com/t5/LabVIEW/Building-LV8-5-application-EXE-error-1502/m-p/2387406 Another potential "fix" is to close all your Windows explorer windows, restart LabVIEW, and try again. Yet another one is to restart LabVIEW and clear your compiled object cache.
  24. Multiple instances of the same LV executable spawn multiple processes in Windows 10 (tested on 2017 SP1 32-bit), which means they (and their DLLs) have separate memory spaces even if they use the same version of the LabVIEW RTE. My test used a 3rd-party DLL which contains a global "quit()" function; calling the global "quit()" on 1 instance did not affect the other instance, which confirms the separation of memory.

     Other things I'd check:

       • Does the crash occur if you only run 1 instance of your test with simulated random data?
       • Does the crash occur if you run your multi-instance test on a different PC?
       • Does the crash occur if you run one instance built with LV 201(x) and another instance built with LV 201(x+y) on the same PC? (Preferably with older versions of LabVIEW, before NI introduced backward-compatible LV RTEs)
       • How does the DLL cope with invalid data? (e.g. divide by 0, Inf, NaN)
       • Are you 100% sure that the DLL doesn't attempt any inter-process communication, network access, file access (including the temp folder), etc.?
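     The memory separation described above can be demonstrated with an analogous (non-LabVIEW) Python sketch: each OS process gets its own copy of module-level state, so mutating a "global" in one process does not affect another, just as one EXE instance's "quit()" did not affect the other:

```python
import multiprocessing as mp

counter = 0  # module-level state, analogous to a DLL's global variable

def bump_and_report(q):
    """Increment this process's own copy of counter and report it."""
    global counter
    counter += 1
    q.put(counter)

if __name__ == "__main__":
    q = mp.Queue()
    workers = [mp.Process(target=bump_and_report, args=(q,)) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    results = sorted(q.get() for _ in workers)
    # Each worker saw its own counter start at 0, and the parent's copy
    # is untouched -- separate processes, separate memory.
    print(results, counter)  # [1, 1] 0
```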
  25. Taylorh140's result is correct. He was talking about SGL values. When you're using a 32-bit SGL, 1.4013E-45 is the smallest possible positive value. In other words, 1.4013E-45 is the next SGL value after 0. When you're using a 64-bit DBL, 4.94066E-324 is the smallest possible positive value.
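     These values can be verified in any IEEE-754 environment. Here is a quick (non-LabVIEW) Python check, constructing the smallest positive subnormal SGL and DBL from their bit patterns:

```python
import math
import struct

# Smallest positive (subnormal) 32-bit single: bit pattern 0x00000001.
smallest_sgl = struct.unpack("<f", struct.pack("<I", 1))[0]
print(smallest_sgl)                # 1.401298464324817e-45 (quoted as 1.4013E-45)
print(smallest_sgl == 2.0 ** -149)  # True: it is exactly 2^-149

# Smallest positive (subnormal) 64-bit double: bit pattern 0x0000000000000001.
smallest_dbl = struct.unpack("<d", struct.pack("<Q", 1))[0]
print(smallest_dbl)                  # 5e-324 (quoted as 4.94066E-324)
print(smallest_dbl == 2.0 ** -1074)  # True: it is exactly 2^-1074

# Confirm it is the next representable double after 0 (Python 3.9+):
print(math.nextafter(0.0, 1.0) == smallest_dbl)  # True
```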