Everything posted by LogMAN

  1. I agree that git in its entirety has become very complex. So much so that they literally have to divide commands into categories... 🤦‍♂️ Talk about feature creep... However, a typical git user needs maybe 10 commands regularly, and perhaps the same number occasionally for specific tasks (like recovering from a detached HEAD) and edge cases (like rewriting history). I found it much easier to teach just those few commands than to steepen the learning curve by adding another tool on top. Don't get me wrong, UI tools are very useful - especially for users who are entirely new to the idea of a VCS. Anyone familiar with concepts like branching, merging, and wandering through the history of a project, however, should at least consider (and perhaps try for a few days) working with the command line interface. It's just text (although the stories it tells are sometimes a bit scary) 😄 Haven't heard of it, but it is certainly something worth looking into. Thanks for sharing!
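To give an idea, a typical everyday set might look something like this (pick your own ten):

git clone <url>        # get a local copy of a repository
git status             # what has changed?
git add <files>        # stage changes
git commit             # record the staged changes
git pull               # fetch and integrate remote changes
git push               # publish local commits
git branch             # list or create branches
git checkout <branch>  # switch branches (also the detached-HEAD culprit)
git merge <branch>     # combine histories
git log                # wander the history of the project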
  2. A typical solution for this is to regularly exchange a "heartbeat" message between your top-level VI and running clone(s). That way, if either the top-level VI or a clone doesn't respond within an expected timeframe (e.g. 60 seconds), the system can react accordingly (e.g. spawn a new clone, report an error, or - in the case of the clone - shut itself down).
  3. Here are some more potential flags:
- Private function calls (anything offered by "SuperSecretPrivateSpecialStuff")
- Undocumented function calls (anything from <labview> that is not on the palette, except those hidden gems)
- Functions that access files in the user's scope (desktop, documents, etc.)
Although the question is about malicious LabVIEW code, there are other points to consider:
- Any form of harassment, racism, etc. as part of the codebase (file names, free labels, VI documentation, error messages, etc.)
- Non-LabVIEW files like pictures, videos, presentations and others, which may contain harmful content. For example, macros in office files.
In my opinion, the likelihood of malicious LabVIEW code is far smaller than that of malicious office documents. This might change with the availability of the community edition, but since LV is very niche, there is not much to gain (unless there is a chance that malicious code gets used by a company like NI or NASA...).
  4. Here is my second part, including some doubts. Please note that I make a few assumptions about the nature of this idea, so take it with a grain of salt.

This will immediately flag all existing (non-malicious) packages as malicious, because each one will fail at least one of those checks. Just try running those checks on the OpenG packages... Also, most of those points only indicate potential technologies with which one could build malicious software. They are certainly not indicators of malicious software on their own. Not just that, but it also limits the options for the kind of license one can choose for their package. In fact, only an open source license is able to pass the checks (no password-protected VIs + no removed block diagrams). While I like open source as much as the next developer, this will prevent businesses from providing licensed solutions via packages. In my opinion this is a bit too restrictive.

I'm no security expert, but malicious code is generally detected during execution. Static code analysis is simply not smart enough to detect nuances in execution behavior. There is also no 100% guarantee that malicious code is triggered during code execution, which is why each user is responsible for verifying code that they download from the internet (sandboxing). We are developers. As such it is our responsibility to take care of every tool we use for our work. This includes third-party packages from "unknown" sources or even package vendors.

There are of course a few things that the package vendor could (should) do to help identify the origin of a package. For example, I want to be sure that the OpenG packages are actually from the OpenG community and not from someone random. This is why packages typically include information about their origin and are tied to a specific username. For example, the OpenG library (package) could belong to the OpenG account: "gcentral.org/packages/openg/openg-library". If you want to go one step further, have package owners sign their packages (e.g. PGP; sketched below). For trusted package owners, GCentral could sign their keys to build a "web of trust". That way, if I trust GCentral, perhaps I can also trust the ones that GCentral trusts...

Regarding malicious code, I'd only expect GCentral to verify that packages don't include viruses (use your average anti-virus software). The rest of it is my responsibility. I am responsible for the code I download from the internet. GCentral should certainly not aim to take responsibility for that. My recommendation is to not have any kind of "no malicious code detected" tag on packages, because it will give developers a false sense of security. A "package from verified source" tag, however, could be worth looking into.
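To sketch the signing idea (file names are made up for illustration; GCentral's actual mechanism may differ):

gpg --detach-sign --armor openg-library.vip          # vendor signs the package, producing openg-library.vip.asc
gpg --import openg-public-key.asc                    # user imports the vendor's public key
gpg --verify openg-library.vip.asc openg-library.vip # user checks that the package matches the signature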
  5. I have two responses to this. Let me start by contributing further suggestions:
- No invisible code - that is, code hidden inside structures that have auto-grow disabled.
- No PPLs.
- No binaries (DLLs, executables, ZIP files, etc.) - except if the source code of the DLL is included or can be obtained via open source channels attributed in the package.
- No unlicensed packages - a reusable package without a license is worthless.
- No broken/missing VIs - bad code is not reusable.
- No viruses, etc. - just use your average anti-virus protection to verify each package. We don't want to unpack a malicious ZIP file or worse...
Perhaps this can be done by a test that checks for #vian_ignore labels in all VIs.
  6. Welcome to LAVA! There is no way to intercept WaitForNextEvent, other than generating the event it is waiting for or letting the timeout occur. It is, however, possible to handle the event directly in LabVIEW. Are you familiar with .NET event callbacks? Use the Register Event Callback function to execute a callback VI that generates a user event for an Event Structure. The event structure can then take care of the timeout as well as other events that may end execution prematurely. Here is an example. For simplicity, I didn't include error handling or the callback VI, but you should be able to work the rest out from here. Disclaimer: I haven't tested this, but I'm certain it will do what you want.
  7. What you describe sounds very similar to our situation, except that we only have a single top-level repository for all stations. If you look at a single station repository of yours, however, the structure is almost the same. There is a single top-level repository (station) which depends on code from multiple components, each of which may depend on other libraries (and so forth).

* Station
  + Component A
    + Library X
    + Library Y
  + Component B
    + Library X
    + Library Z
  + ...

In our case, each component has its own development cycle and only stable code is pulled into the top-level repository. In your case there might be multiple branches for different stations, each of which I imagine will eventually be merged into their respective master branch and pulled by other stations.

* Station (master)
  + Component A (master)
    + Library X (dev-station)
    + Library Y (master)
  + Component B (dev-station)
    + Library X (dev-station)
    + Library Z (master)
  + ...

In my opinion you should avoid linking development branches in top-level repositories at all costs. Stations should either point to master (for components that are under development) or to a tag.

* Station A (released)
  + Component A (tag: 1.0.0)
  + Component B (tag: 3.4.7)
* Station B (in development)
  + Component A (tag: 1.2.0)
  + Component B (master) <-- under development
* Station C (released)
  + Component A (tag: 2.4.1)
  + Component B (tag: 0.1.0)

Not sure if I misunderstood your comment, but you don't actually have to branch a submodule. In fact, anyone could simply commit to master if they wanted to (and even force-push *sigh*). Please also keep in mind that submodules will greatly impact the git workflow and considerably increase the complexity of the entire repository structure. Especially if you have submodules inside submodules... In my opinion there are only two reasons for using submodules: to switch branches often (i.e. to test different branches of a component at station level), or to change the code of a component from within the station repository. Both are strong indicators of tightly coupled code and should therefore be avoided.

We decided to use subtrees instead. For every action on a subtree (pull, change branch, revert, etc.) there is a corresponding commit in the repo. We have a policy that changes to a component are done at component level first and later pulled into the top-level repository. Since the actual code of a subtree is included in the repository, there is no overhead for subtrees that include subtrees, and things like automated tests also work the same as for regular repositories.

You have the right intention, but if any developer is allowed to make any changes to any component, there will eventually be lots of tightly coupled rogue branches in every component, which is even worse than the current state. Not to forget that you also need to make sure that changes to a submodule are actually pushed. This is where UI tools come in handy, as they provide features like pushing changes for all submodules when pushing the top-level repository (IIRC Sourcetree had a feature like that). To be fair, subtrees don't prevent developers from making those changes. However, since the code is contained in the top-level repository, it becomes the responsibility of the station owner instead of the component owner. In my experience it's a good idea to assign a lead developer to each component, so that every change is verified by a single maintainer (or a group of maintainers).
In theory there should only be a single branch with the latest version of the component (typically master). Users may either pull directly from master, or use a specific tag. You don't want rogue branches that are tightly coupled to a single station at component level.
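For illustration, pinning a component at a tag versus tracking master could look like this with subtrees (paths and URLs are made up):

git subtree add --prefix components/component-a https://example.com/component-a.git 1.0.0 --squash
git subtree pull --prefix components/component-b https://example.com/component-b.git master --squash

The --squash option pulls the component in as a single commit instead of importing its entire history.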
  8. You don't have to do a silent installation. If you follow the instructions, it will create a .spec file that you can customize (it's just a text file), so that it behaves like the standard installer but with different default values. It will only do a silent installation when you use the "/q" option. Omit that flag to do a regular installation.
  9. Welcome to LAVA! You can customize the installer with a spec file. If I remember correctly, that also allows you to specify the installation directory. Here is a KB article on how to create the spec file (scroll down to "customizing installation"): https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019Ld6SAE&l=en-US
  10. I use git submodules. Do yourself a favor and don't use them. There are many pitfalls; here are just a few examples:

If a submodule is placed in a folder that previously existed in the repository (perhaps the one you moved into the submodule), checking out a commit from before the submodule existed will cause an error because "files would be overwritten". So you'd have to first delete the submodule and then check out the old commit. Of course, after you are finished you'd have to check out master again and update your submodule to get it back to the current version:

rm -rf submodule/
git checkout <old-commit>
...
git checkout master
git submodule update

This is hard to remember and annoying to use. Not to mention the difficulties for novice git users and the lack of support by most UI tools.

Everyone on the team needs to remember to occasionally run git submodule update to pull the correct commit for all submodules. Just imagine shipping an application with outdated code because you forgot to run this command...

Branching submodules while branching the main repository, or as I call it: "when you spread the love". Imagine this situation: colleague A creates a branch in the main repository, let's call it "branch_a". Now they also create a branch on the submodule, let's call it "submodule_branch_a". Of course they'll update the submodule in the main repository to follow their branch. Then they begin changing code and fixing bugs. In the meantime, colleague B works on master (because why not). Of course, master uses the master branch of the submodule. So they add some exciting new feature to the submodule and update the commit in the main repository. All of this happens in parallel to colleague A. Eventually colleague A will finish their work, which gets merged into master. Unfortunately, things aren't that easy... Although "branch_a" followed "submodule_branch_a", it will not do so after merging to master. Because colleague B changed the commit of the submodule in parallel to colleague A, whichever change happened last will be merged to master. This is an exciting game, where you spread the love, flip a coin and hope that you are the chosen one.

I actually changed our main repository (shared among multiple developers) to submodules a few years ago, only to realize that they introduce problems we simply couldn't fix. So I had to pull the plug, delete all recent changes and go back to the state before submodules. That was a fun week...

That said, we now use subtrees. A subtree is somewhat similar to a submodule in that it allows you to maintain code in separate repositories. However, a subtree doesn't just store the commit hash but actually pulls the subtree into your repository, as if it had been part of that repository in the first place (if you want, even with the entire history). With a subtree, you can simply clone the main repository and everything is just there. No additional commands. However, you'd want to avoid changing any files that belong to subtrees, so that you essentially only pull subtrees and never push (or merge). I simply put them in a "please don't change" directory (anyone who changes files in that directory will have to revert and do their work over). Atlassian has a nice article on git subtrees if you are interested: https://www.atlassian.com/git/tutorials/git-subtree
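To make the "no additional commands" point concrete, compare a fresh clone in both worlds (URL made up):

# with submodules, every clone needs extra steps (easy to forget)
git clone https://example.com/main.git
git submodule update --init --recursive

# with subtrees, the code is already part of the repository
git clone https://example.com/main.git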
  11. I agree, the window is most likely built-in, so there is no way to change it. Not sure if this is useful, but there is a way to customize the behavior of the override retooler: https://forums.ni.com/t5/LabVIEW/Use-of-MUST-OVERRIDE/m-p/3286047/highlight/true?profile.language=en#M960301 This comes in handy if you want to change the general outcome of the override operation (e.g. add an error case, remove the Call Parent function, etc.).
  12. Here are my points:
- By default it should list about 15 to 20 of the most recently updated packages. That way even new packages get promoted and it doesn't feel "static".
- I want to select between a few high-level categories (e.g. Frameworks, Utilities, Drivers).
- I want to specify the version of LV that the package should support (it should take older package versions into account).
- Each package should provide some key facts: name, version, author, summary, picture, rating, price, download count, download button.
- I want to open the details page if I find a package that I might like.
- I want to scroll the page, not a frame inside the page.
In my opinion there is no "right" way to browse for packages, so it should rather provide options that users are willing to use, and make some educated guesses about which default settings and key facts are important (maybe do UATs?). Since there are already a few links to our favorite pages, here is one of mine. It is for a game, but I think the "card" design could work for GCentral as well: https://mods.factorio.com/
  13. Here it is: CAR #1103428. I should probably mention that this is expected to be fixed in LV2020 SP1. It doesn't sound like there will be a fix for LV2019, unfortunately.
  14. I just saved from 2019 to 2015 without any problems, including Error Ring. As @Yair suggested, perhaps mass compiling vi.lib fixes it?
  15. Here is another bug I discovered in LV2019+ which reliably crashes LV. Simply connect a set to the map input of the Map Get / Replace Value node on the IPE: This VI is executable in LV2019 SP1 f3, which means you can actually build it into an executable without noticing. I have reported this, but there is no CAR yet.
  16. I want to share a bug that exists in LV2019+ that results in data being overwritten in independent wire branches when changing maps with the Map Get / Replace Value node of the IPE. Here is the code: This should produce two maps:
<"key", 0>
<"key", 1>
But the actual output is:
<"key", 1>
<"key", 1>
As it turns out, Map Get / Replace Value breaks dataflow. There are two workarounds for this issue:
- Use the Always Copy function
- Wire the Action terminal of the write node
This was reported to NI with the bug report number #7812505.
  17. I agree with @hooovahh. This is likely an issue with Parallels. I don't have a VM to try this, but I assume that your cursor leaves the VM for a brief moment (the "stuttering"), which causes this strange behavior. Parallels probably buffers the cursor info and restores it in a way that is incompatible with LV. Does Parallels allow you to "capture" the cursor, so that it can't leave the VM without pressing some master key? I can imagine that would fix the issue.
  18. I have never seen such behavior in any version of LabVIEW. That said, I use 32 bit. Perhaps it's specific to 64 bit? If Windows scaling is enabled, try disabling it. I had strange behavior because of that in the past.
  19. Sounds like the compiler is busy figuring things out. Just tested it on the QMH template and it is as fast as always (1-2 seconds). That said, the template is not very complex. Do you get better results with a simple project like that?
  20. Whichever system you choose will require a lot of time and effort to implement by the whole team. Don't think that you can simply choose a tool that "feels right" and expect everyone to accept your opinion. I made that mistake and had to rethink my strategy after a disaster of a start (only one of us adopted the new system - me). You also don't want to be responsible for everything, so it's best to have the majority on your side before you enforce a new system.

Your main focus should be on your workflow and your requirements to decide which tools are right for you. For example:
- Do you have small projects or big ones?
- How many developers work on a single project?
- Are your projects deep (multiple projects in separate repositories), wide (multiple projects managed in a single repository), or flat (single repository)?
- Do you always have access to your servers when developing?
- Do you have a project leader, or is each project managed by a single head developer?
- Who are your stakeholders, and what kind of reports, documents, and access rights do they need?
- How are bugs reported to your team?
- Do you need to give third parties (external) access to your source code and bug trackers?
- What kind of development cycle is used for your projects?
- Do you need to integrate additional platforms or tools (e.g. CRM)?

While it is kind of easy to state those questions, the answers are not that simple to give. I still struggle to answer some of those (among other) questions and simply made gut decisions where necessary. Also, don't forget to include your team in this! Simple questions like "Where do you struggle the most?" do wonders if you care to listen.

Here is our tooling for reference: we use Git for source control, Jira for bug tracking, planning, and reporting, and Confluence for the wiki. Our developers are free to use any client they want (e.g. Sourcetree), as long as they adhere to the commit guidelines. We still don't have a working CI/CD solution, so each developer builds their own project (or, in the case of larger projects, the head developer). Unfortunately LV is exceptionally slow in this area, especially for large projects. We are actually at the edge of what we consider acceptable.

In the past we used ZIP files, later SVN. I don't think I need to explain why we moved away from ZIP files. SVN, however, we discarded because it wasn't the right fit. Occasionally we have to change programs on-site, without server access, and want to be able to commit changes. SVN simply is too much effort for such a workflow. You need to remember to check out and lock files before you travel and hope that nobody breaks the lock (see the sketch below). Lessons learned from using SVN: Don't trust your colleagues 😱
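For context, the pre-travel ritual went something like this (the file path is made up):

svn update                                      # get the latest state before leaving
svn lock Station/Main.vi -m "editing on-site"   # lock the files you expect to touch
# on-site: edit offline; committing requires server access, so changes have to wait
svn commit -m "on-site changes" Station/Main.vi # back at the office: commit, which also releases the lock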
  21. Nice, that message did not exist in the version of Sourcetree I used a few years ago. Glad to know they keep improving. Still, some genius decided to make "Yes" the default answer...
  22. Let's also not forget that Git was originally invented by Linus Torvalds to manage the Linux kernel, with thousands of contributors, commits, and branches every day. Although my projects are slightly below that complexity, Git does manage (barely). This is the fault of tools like SourceTree. Here is an example of Git moving away from a detached head (see the reconstruction below): Again, they provide you with the necessary command template to prevent you from losing track of your changes. I cannot stress enough how much better the command line interface is compared to any of the tools (at least the ones I have tried).
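For those who haven't seen it, the message in question looks roughly like this when you switch back to a branch after committing on a detached HEAD (the hash and commit message are made up; wording varies between Git versions):

Warning: you are leaving 1 commit behind, not connected to
any of your branches:

  d7470b6 fix off-by-one in parser

If you want to keep it by creating a new branch, this may be a good time
to do so with:

 git branch <new-branch-name> d7470b6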
  23. Atlassian has comprehensive (and visually appealing) documentation on Git: https://www.atlassian.com/git/tutorials https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet Unfortunately Git has a lot of commands, most of which you'll probably never use. I suggest you play around with it to get used to the workflow and be prepared for your daily job. Just don't do stuff like "git push -f" and you should be fine. You might find this worth reading: https://sqlite.org/whynotgit.html

There you have it: your problem is not Git, it's the tools. Have you ever tried the command line? In my experience, there are many tools that work well for some aspects but fail miserably at others. They only get you so far before you need some command line voodoo to fix the "unfixable". Eventually you'll get used to the command line 🤷‍♂️ (I lost this battle a long time ago).

This is the same reason why I abandoned SourceTree a few years back. Tools like that only give you access to a subset of instructions, and they don't even provide ways to undo bad decisions. To be fair, you do actually get a big warning message (which nobody seems to read, including me). The command line version of this is much more helpful in my opinion, as it even teaches you some new commands (see below). There is also no way to mistake it for the normal output.
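Assuming the warning in question is the one for a rejected (non-fast-forward) push, the command line version looks roughly like this (repository URL made up; wording varies between Git versions):

To https://example.com/repo.git
 ! [rejected]        master -> master (non-fast-forward)
error: failed to push some refs to 'https://example.com/repo.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.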