Everything posted by LogMAN

  1. By any chance, is your main VI launched in a different application context (e.g. from the tools menu or a custom context)? Here is an example where the dialog is called from the tools menu (top) while the front panel is open in the standard context (bottom).
  2. Good question. There are different kinds of prototypes that serve different purposes. The one I'm referring to is a throwaway prototype. It serves as a training ground to try out a new architecture and/or refine requirements early in the process. The entire point of this kind of prototype is to build it fast, learn from it, then throw it away and build a better version with the knowledge gained.
  3. Even without porting to a different language, a small prototype is a good idea if you aren't sure about your architecture. That way you can test your ideas early and refine them before you begin the actual project. Just don't forget to throw away the prototype before it gets too useful 😉
  4. I had to do that once. Moved from LV to C++. A translation doesn't make sense as many concepts don't transfer well between languages, so it was rewritten from scratch. It's actually funny to see how simple things become complex and vice versa as you try to map your architecture to a different language. I assume by "done" you mean feature parity. We approached it as any other software project. First build an MVP and then iterate with new features and bug fixes. We also provided an upgrade path to ensure that customers could upgrade seamlessly (with minimal changes) and made feature parity a high priority. The end result is naturally less buggy as we didn't bother to rewrite bugs 😉 It certainly took a while to get all features implemented but reliability was never a concern. We made our test cases as thorough as possible to ensure that it performs at least as well as the previous software. There is no point in rewriting software if the end result is the same as (or even worse than) its predecessor. That would just be a waste of money and developer time.
  5. Perhaps consider using separate Hardware Configuration classes instead of JSON strings. That way your Hardware classes are independent of the configuration storage format (which may change in the future). All of your Hardware Configuration classes could inherit from a base class that is then cast by each Hardware at runtime. Sounds reasonable. From what you describe, it sounds more like a registry than a manager. Could you imagine using a Map instead? Your hardware manager also has a lot of responsibilities. First of all, it should not be responsible for creating classes. This should be the responsibility of a factory (Hardware Factory). If you want to load classes on-demand, then the factory should be passed to the manager. Otherwise, the factory should create all classes once on startup and pass the instances to the manager. It also sounds as if the Hardware Factory should receive the configuration data. In this case, the configuration data could be a separate class or a simple cluster. In either case, the factory should not be responsible for loading the data (for the same reason as for the Hardware above). A proxy could be useful here (a class that forwards calls to another class). In the case of a "Keithley DMM with a built-in switch card", the instance could be passed directly to one of your operations. In the case of a "Keithley DMM and an NI switch card", however, a proxy could hold the specific hardware instances and forward all calls to the appropriate hardware. You can load a class from disk and cast it to a specific type. See Factory pattern - LabVIEW Wiki for more details.
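Since the concepts are language-agnostic, here is a minimal sketch of that split in Python (all class and method names are illustrative, not from the original discussion): the factory creates instances from configuration data, the manager only registers and looks them up like a Map, and a proxy forwards calls to the underlying hardware.

    class Hardware:
        def measure(self) -> float:
            raise NotImplementedError

    class KeithleyDMM(Hardware):
        def measure(self) -> float:
            return 1.23  # placeholder reading

    class NISwitch:
        def connect(self, channel: int) -> None:
            pass  # route the channel before measuring

    class DMMWithSwitchProxy(Hardware):
        """Forwards calls to a DMM and a separate switch card."""
        def __init__(self, dmm: Hardware, switch: NISwitch, channel: int):
            self._dmm, self._switch, self._channel = dmm, switch, channel
        def measure(self) -> float:
            self._switch.connect(self._channel)
            return self._dmm.measure()

    class HardwareFactory:
        """Creates hardware instances from configuration data."""
        def create(self, config: dict) -> Hardware:
            if config["type"] == "KeithleyDMM":
                return KeithleyDMM()
            raise ValueError(f"unknown hardware type: {config['type']}")

    class HardwareManager:
        """A registry (Map): stores instances by name, nothing more."""
        def __init__(self) -> None:
            self._registry: dict[str, Hardware] = {}
        def register(self, name: str, hw: Hardware) -> None:
            self._registry[name] = hw
        def get(self, name: str) -> Hardware:
            return self._registry[name]

    factory = HardwareFactory()
    manager = HardwareManager()
    manager.register("dmm1", factory.create({"type": "KeithleyDMM"}))
    print(manager.get("dmm1").measure())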
  6. I suspect they did another one of their "approaches" and hired more consultants to completely miss the point... This is really sad and I fail to see how any of this makes LabVIEW a better product and not just more expensive to their current user base. Responses like this are also a good reason to seek alternatives. NI has made it clear for quite some time that LabVIEW is only an afterthought to their vision. Instead they are building new products to replace the need for LabVIEW ("it's not the only tool"). Customers will eventually use those products over writing their own solutions in LabVIEW, which means more business for NI and a weaker argument for LabVIEW. In my opinion, higher prices are also a result of balancing cross-subsidization. In the past, other products likely added to the funds for LabVIEW development in order to drive business. With more and more products replacing the need for LabVIEW, these funds are no longer available. Eventually, when there are not enough customers to fund development, they will pull the plug and sunset the product. On the bright side, they might gain a large enough user base to invest in the long-term development of LabVIEW. They might listen to the needs of their users, improve its strengths and get rid of its weaknesses. They might make it a product that many engineers look forward to using and whose next major release they can't wait for, so they can engineer ambitiously. I hope for the latter and prepare for the former.
  7. Here is a video that showcases logic designed by NI. It counts the number of iterations since the last state change and triggers when a threshold is reached.
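In case the video disappears, here is a rough sketch of that logic in Python (the threshold and state values are made up): count iterations while the state stays the same and trigger once the count reaches the threshold.

    def make_trigger(threshold: int):
        last_state = None
        count = 0
        def update(state) -> bool:
            nonlocal last_state, count
            if state != last_state:
                last_state, count = state, 0  # state changed: restart the count
            else:
                count += 1
            return count >= threshold
        return update

    trigger = make_trigger(threshold=3)
    for state in [0, 0, 0, 0, 1, 1, 1, 1]:
        print(trigger(state))  # True once a state has been unchanged for 3 further iterations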
  8. My system is to put them in project libraries or classes and change the scope to private if possible. Anything private that only serves as a wrapper can then easily be replaced by its content (for example, private accessors that only bundle/unbundle the private data cluster). Classes that expose all elements of their private data cluster can also be refactored into project libraries with simple clusters, which gets rid of all the accessors. Last but not least, VI Analyzer and the 'Find Items with No Callers' option are very useful for detecting unused and dead code, especially after refactoring. What do you mean by "very little"? The 'Add' function does very little but it is very useful. If you have lots of VIs like that, your code should be very readable no matter the number of VIs.
  9. You can find a lot of information on the type descriptor help page. The LabVIEW Wiki also has a comprehensive list of all known type and refnum descriptors. Feel free to add more details as you discover them 🙂 That said, I would use Get Type Information over type string analysis whenever possible. It's much easier and less error prone. A great example of this is the JSONtext library, which utilizes the type palette a lot.
  10. The answer is in the title. NI will stop selling perpetual licenses by the end of this year. Any licenses renewed before that date will continue until they expire; after that, NI will only offer subscription-based licenses.
  11. Well, technically speaking that should be the case if you take their answer literally (and ignore the rest of the sentence). Most likely, though, it will allow you to use any version listed on the downloads page, which currently goes back to LV2009. You might also be able to activate earlier versions if you still have access to the installer, but I'd be surprised if that went back further than perhaps 8.0. Only NI can tell.
  12. It looks like the certificate was renewed today. According to the certificate, it is valid from 13/Dec/2021 to 13/Jan/2023. Have you tried clearing your cache? In most browsers you can force a reload that bypasses the cache with <Ctrl> + <F5>. Hope that helps.
  13. Here is a similar post from the CR thread. The reasons for this behavior are explained in the post after that. JSONtext essentially transforms the data contained in your variant, not the variant itself. So when you transform your array into JSON, the variant contains name and type information. But this information doesn't exist when you go the other way around (you could argue that the name exists, but there is no type information). The variant provided to the From JSON Text function essentially contains a nameless element of type Void, so JSONtext has no way to transform the text into anything useful. To my knowledge there is no solution to this. The only idea I have is to read the values as strings, convert them into their respective types and cast them to variants manually.
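A rough analogy of that manual workaround in Python (JSONtext and variants are LabVIEW concepts, so this only illustrates the idea; the key names and type table are made up): read everything back as strings and convert each value explicitly.

    import json

    raw = json.loads('{"voltage": "3.3", "count": "42"}')  # values arrive as strings

    # Without stored type information, the caller has to maintain the
    # per-key types and perform every conversion manually.
    types = {"voltage": float, "count": int}
    converted = {key: types[key](value) for key, value in raw.items()}
    print(converted)  # {'voltage': 3.3, 'count': 42}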
  14. I don't have any experience with Macs, but there is a topic in the LabVIEW 2021 Beta Forum related to Apple M1 chips: [New Feature] macOS Big Sur Support - NI Community
  15. You can use the shortcut menu on a property node to select the .NET class. The one you are looking for is in the mscorlib assembly. System.Environment.vi
  16. Your issue could be related to these topics (I assume "glaring front issue" means blurry): Texts in Icon Editor Get Blurry - National Instruments
  17. The best way is to report posts via the three dots in the upper right corner. That way moderators get notified.
  18. This is expected behavior: Ensure That Event Structures Handle Events whenever Events Occur - LabVIEW 2018 Help - National Instruments (ni.com) You are probably right about compiler optimization for unreachable code. Changing the compiler optimization level most likely has no effect because it is still unreachable and therefore not included.
  19. Here is what happens: Scalar JSON text to Variant.vi uses the index output of the Search 1D Array function (I32) for the Enum value (U16). JSON to Map.vi then uses Variant To Flattened String (special).vi to extract the data. An Enum U16 has 2 bytes but an I32 has 4 bytes of data, so the Map data gets offset by 2 bytes for each key. Scalar JSON text to Variant.vi also uses the output of Get U32.vi for all unsigned integers. JSON to Map.vi then uses Variant To Flattened String (special).vi to extract the data. A U16 has 2 bytes but a U32 has 4 bytes of data, so the Map data gets offset by 2 bytes for each value. The solution is to cast all values according to their respective types. Here are the two offending sections for your particular case:
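To illustrate the offset with plain bytes (a Python struct sketch, not the LabVIEW VIs themselves): storing a 4-byte integer where a 2-byte one is expected shifts every following field by 2 bytes.

    import struct

    # Value 3 stored as a 4-byte signed integer (I32), followed by a U16 holding 42.
    flattened = struct.pack(">i", 3) + struct.pack(">H", 42)

    # A reader expecting a 2-byte enum (U16) consumes only 2 of those 4 bytes,
    # so everything after it is read from the wrong offset.
    enum_value, = struct.unpack_from(">H", flattened, 0)  # reads 0 instead of 3
    next_value, = struct.unpack_from(">H", flattened, 2)  # reads 3 instead of 42
    print(enum_value, next_value)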
  20. Thanks for sharing and kudos to everyone involved! I did a few tests with a dummy project and it works like a charm. The fact that it provides all the tooling to automate the process is just mind-blowing. It looks like a very powerful tool to save and restore data, with precise control over data migration and versioning. Also, BlueVariantView - very insightful and handy. Have you ever explored the possibility of manipulating mutation history (i.e. removing older versions)? That could be useful for users with a slow editor experience: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000015BLnSAM&l=en-US Fair enough. There are always tradeoffs to tools like this. There is one point I would like to address: "Pretty serious security risk. There is no way to protect private data when serializing/de-serializing". Scope is not the same as security. If you want to secure data, encrypt it. If you want to serialize only some data, but not all, that is not a security risk but an architectural challenge. You'll find that the same issue exists for the Flatten To XML function. It requires designing classes such that they include only data you want to serialize. A typical approach is to have separate data objects with no business logic. One point missing in this list is the lack of control over data migration and versioning. My library depends entirely on the ability of JSONtext to de-serialize from JSON. It will fail for breaking changes (e.g. changing the type of an element). Your library provides the means to solve this issue, but it has its own limitations. There are two points users need to be aware of: When de-serializing, Serialized Text must include "Class" and "Version" data, which makes it difficult to interface with external systems (e.g. RESTful APIs). Classes must inherit from BlueSerializable.lvclass, which could interfere with other frameworks or APIs. What makes your library special is the way it handles version changes. We have been doing something similar with regular project libraries, using disconnected typedefs to migrate data between versions. Being able to do the same thing with our objects is very appealing (up to now we have considered them breaking changes and lived with it). I'll certainly bring this up in our next team meeting. Very cool project. Thanks again for sharing!
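As an aside, here is a sketch of the "separate data objects with no business logic" idea as a Python analogy (class names are made up; in LabVIEW the equivalent would be a class whose private data contains only the fields you intend to serialize):

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class MeasurementSettings:
        """Plain data object: holds only what should be serialized."""
        channel: int
        sample_rate: float

    class Measurement:
        """Business logic; its private members never enter the serialized output."""
        def __init__(self, settings: MeasurementSettings, api_key: str):
            self._settings = settings
            self._api_key = api_key  # deliberately kept out of to_json
        def to_json(self) -> str:
            return json.dumps(asdict(self._settings))

    m = Measurement(MeasurementSettings(channel=1, sample_rate=1000.0), api_key="secret")
    print(m.to_json())  # {"channel": 1, "sample_rate": 1000.0}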
  21. I should have put a smiley at the end of my sentence 😅
  22. A few years ago I worked on a project that required private API keys. To make things easier, I simply created a private branch that had my keys hardcoded. This is one example of a branch that I certainly didn't want to push to a public server. At the same time it allowed me to regularly merge from master at no cost. You could create a custom git hook to push local changes on every commit, but you are probably better off with a different VCS.
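For anyone curious about the hook route anyway: a git hook can be any executable, so a minimal post-commit sketch in Python might look like this (the file path, remote and branch names are assumptions). It pushes the current branch after every commit while skipping the private one.

    #!/usr/bin/env python3
    # Save as .git/hooks/post-commit and make it executable.
    import subprocess

    # Name of the branch that was just committed to.
    branch = subprocess.check_output(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"], text=True
    ).strip()

    # Never push the private branch that holds the hardcoded API keys.
    if branch != "private-keys":
        subprocess.run(["git", "push", "origin", branch], check=False)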
  23. The common denominator in this case is the commits, each of which is uniquely identifiable by its commit hash. In ".git/objects" you'll find all commits. When you push or pull a repository, these are the objects that get exchanged. A branch simply points to one of these commits. If you know the commit id, you can create a branch for it. In ".git/refs/heads" you'll find all local branches. In ".git/refs/remotes/<remote>/" you'll find all remote branches (execute 'git fetch <remote>' to update the list). Finally, there is a file ".git/config" that specifies which local branch tracks which remote branch. In this example, local branch "main" tracks remote branch "origin/main":
      [branch "main"]
          remote = origin
          merge = refs/heads/main
      As you already discovered, it is possible to have different names for local and remote branches. This, however, is typically considered bad practice unless you have multiple remotes with the same branch name, in which case a typical approach is to prefix the local branch with the name of the remote (e.g. "origin_main", "coworker_main", ...). A general rule of thumb is to avoid this situation whenever possible. If a local branch does not track a remote branch, it won't get pushed to the remote. This is typically used for private feature/test/throwaway branches. Pretty useful in my opinion, but it also took me a while to wrap my head around this. Of course you can always push your local branches to the remote with 'git push -u <remote> <branch-name>'.
  24. When you add a commit to your local branch, it advances the branch pointer to that commit. The remote branch is not affected. Only when you push your changes to the remote does it forward the remote branch to the same commit. Here is an example where a local branch (orange) is ahead of a remote branch (blue) by two commits. Pushing these changes will forward the remote branch to the same commit. Of course this also works the other way around, in which case you need to pull changes from the remote repository.
  25. I too use InnoSetup to extract and install various products. Regarding the installer, I'm not sure if you are aware of this, but you can get offline installers from the downloads page. My installer also does a silent installation of NIPM and various packages. The details are explained in these KB articles: Automating the Unattended Installation of NI Package Manager (NIPM) - National Instruments How Can I Control NI Package Manager Through the Command Line? - National Instruments If I remember correctly, the parameters described in the first article should also work for other offline installers. Nevermind, here is the description for individual offline installers: Automating an Installer - NI Package Manager 20.7 Manual - National Instruments