Everything posted by LogMAN

  1. There is a VI in the OpenG LabVIEW Data Library that does this for you. I took this as a challenge and added two VIs to my library on GitHub - https://github.com/LogMANOriginal/LabVIEW-Composition Decompose Map extracts the variant keys and values of variant maps. Decompose Set extracts the variant elements of variant sets. I have successfully tested these VIs with various types, but there could still be bugs. Let me know if you find anything. I strongly discourage using these in production!
  2. The library is now available on GitHub (including test cases) https://github.com/LogMANOriginal/LabVIEW-Composition I also discovered this project, which provides some useful methods to work with variant data.
  3. It's a separate library. Object composition was actually much more difficult to figure out than the other way around. I have attached the library for LV2017 (without test suites and package configuration). I'll also put this on GitHub in the near future. Here is an example that overwrites elements in the private data cluster (the outer IPE addresses the class hierarchy). Here is an example that uses JSONtext to extract data from a private data cluster. I was looking into this particular case as a way to transition between clusters and objects. Both examples are included in the package. Object Decomposition LV2017.zip
  4. This was meant as a proof of concept to see if it can be done and whether it's something worth investigating. I should probably mention that this branch has a few bugs that I haven't fixed yet. It is certainly not something I would use in production right now, but I still believe there is some value in this - especially for general-purpose libraries like JSONtext. Anyway, I'll back-save and upload it when I have access to LV. By the way, the details are explained on the Wiki: LabVIEW Object - LabVIEW Wiki I haven't found a better way to do this without adding (or scripting) methods to every class. The only function that currently breaks encapsulation natively is Flatten To XML, which has its own limitations.
  5. If you are interested in object (de-)composition, I have a library that does most of the heavy lifting (extracting private data and putting it back together), including functions to work with clusters of any size and shape. I can save it back to LV2017 and post it here.
  6. Only the name is deleted; the commits are left untouched. It is actually possible to restore the branch name if you know the commit hash - https://stackoverflow.com/a/2816728 This can be useful if you deleted a branch before it was merged into master, or if you want to branch off a specific commit in the history that is currently unlabeled. The commands are sketched below. Here is some documentation from Atlassian, generally applicable to GitHub as well: Git Branch | Atlassian Git Tutorial Pull Requests | Atlassian Git Tutorial
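     For reference, a minimal command-line sketch (the branch name and hash are placeholders):

         # Find the hash of the deleted branch tip if you don't know it.
         git reflog
         # Recreate the branch at that commit.
         git branch my-feature 1a2b3c4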
  7. The Network Graph mentioned by @JKSH does give you some visualization on GitHub. I personally prefer the visualization in Sourcetree and bash. Here is an example for GitHub - microsoft/vscode: Visual Studio Code The command I use is git log --oneline --graph You can see that branches still exist even after merging. Only the name of the branch, which is just a fast way to address a specific commit hash, is lost (although it is typically mentioned in the merge commit message). That said, some branches can be merged without an explicit merge commit. This is called a "fast-forward" merge - https://stackoverflow.com/a/29673993. Maintainers on GitHub can decide whether they always want a merge commit or not.
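     For completeness, this is how the two merge styles look on the command line (the branch name is a placeholder):

         git checkout master
         git merge my-feature          # fast-forwards if master has not diverged (no merge commit)
         git merge --no-ff my-feature  # always records an explicit merge commit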
  8. Here is some information about this error: VISA Error -1073807339 (0xbfff0015) Timeout Expired Before Operation Completed - National Instruments (ni.com) There could be many reasons for a timeout error. The error message only indicates that a timeout occurred before a reply was received, which is not very useful. NI IO Trace might give you some additional clues. Maybe put the master in a shift-register on your while loop. Not sure if that makes a difference. This is specified in the Modbus Application Protocol, although implementations vary between 1-based and 0-based. The mapping of addresses is typically resolved internally.
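     To illustrate the 1-based vs. 0-based point, here is a minimal sketch in Python (the function name and the traditional "4xxxx" holding-register convention are illustrative; your device's documentation is authoritative):

         def holding_register_to_pdu_address(register_number):
             """Map a 1-based holding-register number from the traditional
             Modbus data model (e.g. 40001) to the 0-based address that is
             actually transmitted in the protocol data unit."""
             if not 40001 <= register_number <= 49999:
                 raise ValueError("not a holding register number")
             return register_number - 40001

         print(holding_register_to_pdu_address(40001))  # -> 0
         print(holding_register_to_pdu_address(40108))  # -> 107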
  9. You got it right. "Delete branch" will delete the branch on your fork. It does not affect the clone on your computer. The idea is that every pull request has its own branch, which, once merged into master, can safely be deleted.

     This can indeed be confusing if you are used to centralized VCSs. In Git, any repository can be a remote. When you clone a Git repository, the source becomes a remote of the clone. It doesn't matter whether the remote is on your computer or on another server. You can even have multiple remote repositories if you want. You'll notice that the clone - by default - only checks out the master branch. Git allows you to pull other branches if you want, but that is not mandatory. Likewise, you can have as many branches of your own as you like without having to push them to the remote (sometimes you can't because it is read-only).

     On GitHub, when you fork a project, the original project becomes a remote of your fork (you could even fork a fork if you wanted to...). When you clone the fork, the fork becomes a remote of your clone. When you add a branch, you can push it to your fork (because you have write access). Then you can go to GitHub and open a pull request to ask the maintainer(s) of the original project to merge your changes (because you don't have write access). Once merged, you can delete the branch from your fork, because the changes are now part of master in the original project (there is no reason to keep it).

     Notice that the master branch on your fork is now behind master of the original project (because your branch got merged). Notice also that this doesn't affect your local clone (you have to delete the branch manually). You can now update your fork on GitHub, pull from your fork, and finally delete the local branch (Git will warn you about deleting branches that have not been merged into master). The typical commands are sketched below.

     There is a page which describes the general GitHub workflow: Understanding the GitHub flow · GitHub Guides Hope that helps.
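     A minimal sketch of that workflow on the command line (all names and URLs are placeholders):

         # Clone your fork; the fork becomes the remote named "origin".
         git clone https://github.com/you/project.git
         cd project
         # Add the original project as a second remote, commonly named "upstream".
         git remote add upstream https://github.com/original/project.git
         # Work on a topic branch and push it to your fork.
         git checkout -b my-fix
         git push -u origin my-fix
         # ...open the pull request on GitHub; after it has been merged:
         git checkout master
         git pull upstream master    # update master from the original project
         git push origin master      # bring the fork up to date
         git branch -d my-fix        # delete the local branch (-d refuses unmerged branches)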
  10. For starters, there are a few DWarns:

      c:\nimble\penguin\labview\components\mgcore\trunk\18.0\source\ThEvent.cpp(216) : DWarn 0xECE53844: DestroyPlatformEvent failed with MgErr 42.
      e:\builds\penguin\labview\branches\2018\dev\source\typedesc\TDTableCompatibilityHack.cpp(829) : DWarn 0xA0314B81: Accessing invalid index: 700
      e:\builds\penguin\labview\branches\2018\dev\source\objmgr\OMLVClasses.cpp(2254) : DWarn 0x7E77990E: OMLVParam::OMLVParam: invalid datatype for "Build IGL"
      e:\builds\penguin\labview\branches\2018\dev\source\typedesc\TypeManagerObjects.cpp(818) : DWarn 0x43305D39: chgtosrc on null! VI = [VI "LSD_Example VI.vi" (0x396f46b8)]
      e:\builds\penguin\labview\branches\2018\dev\source\UDClass\OMUDClassMutation.cpp(1837) : DWarn 0xEFBFD9AB: Disposing OMUDClass definition [LinkIdentity "StatusHistory.lvclass" [ Poste de travail] even though 5 inflated data instances still reference it.
      e:\builds\penguin\labview\branches\2018\dev\source\UDClass\OMUDClassMutation.cpp(1837) : DWarn 0xEFBFD9AB: Disposing OMUDClass definition [LinkIdentity "Delacor_lib_QMH_Message Queue V2.lvclass" [ Poste de travail] even though 1 inflated data instances still reference it. This will almost certainly cause a crash next time we operate on one o

      Here is some information regarding the differences between DWarns and DAborts: I'd assume that one of the plugin VIs or classes is broken. You can try to clear the compiled object cache to see if that fixes it. Alternatively, uninstall each plugin until the issue disappears (start with LVOOP Assistant; I remember having issues with it in LV2015).
  11. Dear NI

      It could be open source and still be maintained by NI, as long as they have a way to generate revenue. There is also great potential in the NXG platform, which - as far as I know - is written in C#. Even if LabVIEW is not of interest to millions of people, keep in mind that most open source projects only receive contributions from a small portion of their users. The Linux kernel is probably not a good comparison, because it is orders of magnitude more complex than LabVIEW. Nevertheless, Linux "only" received contributions from approx. 16k developers between 2005 and 2017 - 2017 Linux Kernel Report Highlights Developers' Roles and Accelerating Pace of Change - Linux Foundation. Compare that to relatively young projects such as Visual Studio Code (~1400 contributors) or the .NET Platform (~650 contributors). These are projects with millions of users, but (relatively speaking) few contributors. It depends. Companies might be willing to pay developers to fix issues. Enthusiasts might just dive into the code and open a pull request with their solution. Some items might not be of particular importance to anyone, so they are just forgotten.
  12. Good selection by @Mefistotelis. Try to figure out what motivates them (games, machines, information, ...) and help them find the right resources. Try different things; perhaps something sticks. If not, move on to the next. Here are two links that can get you started with Python in a few minutes: Take your first steps with Python - Learn | Microsoft Docs Python Getting Started (w3schools.com)
  13. Not sure where you got that. It's a valid approach: Command pattern - LabVIEW Wiki The Actor Framework, for example, takes this idea to the extreme. I'm not a fan of the 0ms timeout case because it adds unnecessary polling. The rest sounds good to me. It is probably best if you build a prototype to see what works best for you.
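     Since LabVIEW diagrams don't translate to text, here is a rough sketch of the Command pattern in Python (all names are made up); note how a blocking dequeue avoids polling with a 0 ms timeout:

         import queue

         class Command:
             """Base class: each command knows how to execute itself."""
             def execute(self):
                 raise NotImplementedError

         class LogCommand(Command):
             def __init__(self, message):
                 self.message = message
             def execute(self):
                 print(self.message)

         commands = queue.Queue()
         commands.put(LogCommand("hello"))
         commands.put(None)  # sentinel to stop the consumer

         # The consumer blocks until a command arrives instead of polling.
         while True:
             command = commands.get()
             if command is None:
                 break
             command.execute()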
  14. You are using some patterns in the wrong context. The Factory pattern is only a means to create new objects without explicitly referencing them. Here is a very simple example using a Boolean to select from two classes. In this example, Class 1 and Class 2 are both children of Superclass. Depending on the input value, one of them is returned. The caller doesn't need to know how to create those objects and only sets the input value to True or False. A typical implementation uses a String instead of a Boolean, which allows for many more options and makes it possible to add more classes later. In any case, the output of a Factory is the most common ancestor of all possible children (Superclass in this example).

      Dynamic dispatching is not something that magically merges different functions into one. It is only a way to change the behavior of a function at runtime. Perhaps you are familiar with Polymorphic VIs. Dynamic dispatching is actually very similar to polymorphism. The difference is how they are resolved: Polymorphic VIs are resolved at edit time, dynamic dispatch VIs at runtime. This is why dynamic dispatch VIs must always have the same terminal pattern. This is of course a very simplified explanation.

      For your particular question, there are two parts to it: Create specific subclasses for each type of power supply - for example, XNET and DAQmx are entirely different technologies, so it makes sense to have separate classes for each. Use the Strategy pattern to change the behavior of a particular method - for example, your XNET class could use different strategies to do the actual read operation (XY, double, etc.). The Strategy pattern encapsulates an operation in an object. You need to write one object for every possible strategy and then provide the desired strategy at runtime (i.e. using a Factory).

      Here is a basic example. In this example, XNET is a subclass of Power Supply, which has a Read Data method that returns an array of double. When XNET is created, the desired read strategy is passed as an argument. The strategy has another dynamic dispatch method to do the actual read operation. The Read Data method then uses the strategy to read the data. DAQmx would work similarly, perhaps with its own set of strategies. I believe this comes very close to what you have in mind (a text-based sketch follows below).

      Don't put events and queues inside the reader class. Instead, have a separate class that uses the reader class to produce events or populate queues (these should be separate classes altogether, one for queues and one for events). I suggest you play around with the different patterns to get used to them before you use them in production. OOP can get confusing very quickly if you are only used to functional programming.
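      For reference, here is the same Factory + Strategy idea sketched in Python (class and method names are made up for illustration and don't mirror any actual driver API):

          from abc import ABC, abstractmethod

          class ReadStrategy(ABC):
              """Strategy: encapsulates one way of reading data."""
              @abstractmethod
              def read(self) -> list[float]: ...

          class ReadDouble(ReadStrategy):
              def read(self) -> list[float]:
                  return [1.0, 2.0, 3.0]  # stand-in for the actual read operation

          class PowerSupply(ABC):
              @abstractmethod
              def read_data(self) -> list[float]: ...

          class Xnet(PowerSupply):
              def __init__(self, strategy: ReadStrategy):
                  self.strategy = strategy  # behavior is injected at runtime
              def read_data(self) -> list[float]:
                  return self.strategy.read()

          def create_power_supply(kind: str) -> PowerSupply:
              """Factory: callers choose by name and never touch concrete classes."""
              if kind == "xnet":
                  return Xnet(ReadDouble())
              raise ValueError(f"unknown power supply: {kind}")

          supply = create_power_supply("xnet")
          print(supply.read_data())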
  15. Glad to hear it worked out well for you. I wish I'd had this confidence 10 years ago... I agree. Unfortunately the decision is not always up to us, especially for young teams without an expert recognized by higher-ups. In our case it went something like this: NI is also not very helpful in dampening expectations: The rest is history. To be fair, our case was a single incident. The general experience with contractors has been very positive and insightful. Still, I would probably raise an eyebrow if a CLA told me that they don't know how to work with classes. It just seems weird considering that it is the final stage in the certification process. Funny how you spell "is" 😄
  16. Actually had to google that. If I understand it correctly, you are saying that my sentence is phrased in a way that is offensive to you (and perhaps others). That was not my intention. Let me try to explain myself and hopefully clear up what I presume is a misunderstanding.

      By "A CLA who isn't familiar with fundamental OOP concepts..." I mean somebody who has no prior knowledge of OOP whatsoever. They don't know what a class is or how inheritance or encapsulation works. It is my opinion that this makes them incapable of choosing between different OOP frameworks and libraries themselves (of course they could ask somebody they trust).

      For the second part, "and in the worst case puts the entire project at risk by making random decisions", imagine somebody without any prior knowledge of OOP being brought into a project that is heavily based on OOP (i.e. using Actor Framework). They are brought in by management to evaluate the situation and get the product ready for market. Of course management will listen to the CLA as an expert (as they should). If the CLA ignores the fact that they don't know anything about OOP (worst-case scenario), the best they can do is decide based on their instinct, feedback from other developers, or simply by tossing a coin and hoping for the best. There is a great chance that this will put the project at risk because everyone listens to that expert. I can't be the only one who went down this rabbit hole.

      The last part, "or avoiding OOP because they don't see the value in it", is about changing architecture late in a project because of a personal vendetta against OOP. Let's take the example from before. The CLA might decide that Actor Framework is not a good solution simply because they don't like OOP stuff. So they tell management to toss everything away because it's "no good" and start from scratch using their own approach. Unless the architect has really good arguments, decisions like that are toxic to any project. I have actually experienced a situation that went in the opposite direction (replacing everything with objects because OOP is THE way to go). We eventually reverted to what we had before thanks to the power of Git (that was probably the most expensive revert we did - so far).

      Just to clear up one additional point, because I believe it didn't come across in my original post: I believe that there is value in OOP, but I don't think it is the only answer to everything. On the contrary. Frameworks like DQMH completely eliminate the need for frameworks like the Actor Framework. Depending on your needs and what you feel more comfortable with, either one is a great choice. I simply expect a CLA to have basic knowledge of OOP, even if they decide against it.
  17. Did you only come here to mock me? If you disagree with something I said, please feel free to express your point of view and perhaps we can find common ground.
  18. I believe it should. OOP (and interfaces in the near future) are architecturally relevant and at the core of frameworks and libraries that drive so many applications. A CLA should be able to assess whether a particular framework is a good choice architecturally, or not. A CLA who isn't familiar with fundamental OOP concepts is incapable of making such decisions and in the worst case puts the entire project at risk by making random decisions or avoiding OOP because they don't see the value in it. It probably makes sense to focus on fundamental concepts in the exam, because frameworks and libraries eventually get outdated and replaced by new ones.
  19. @Matteo.T You need to start a new topic for your question; it doesn't belong in this thread.
  20. Do you receive any error at the error output terminal? Does the other team see any error in the server logs? Did the other team give you any idea about what kind of web service they provide? Since the other team suggested Postman, chances are high that they use a RESTful service. JKI has a REST API Client library on their website which you could try: JKI HTTP REST API Client for LabVIEW
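      If it is indeed a RESTful service, a request is also easy to prototype outside of Postman. Here is a minimal sketch in Python (the URL is a placeholder for whatever endpoint the other team provides):

          import json
          import urllib.request

          # Placeholder endpoint; substitute the URL the other team gave you.
          with urllib.request.urlopen("https://api.example.com/status", timeout=10) as response:
              body = json.loads(response.read().decode("utf-8"))
              print(response.status, body)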
  21. Unless you work entirely on Linux and OSS, most of the core libraries are closed source. Even if that were not the case, you would still need to trust the hardware. That's why it's important to test your mission-critical software (and hardware) before you put it in the field. No amount of open source will make your software more secure or reliable. You only get to fix bugs yourself and be responsible for it. To be fair, most of us are probably not doing rocket science... right? "There will be just one .NET going forward, and you will be able to use it to target Windows, Linux, macOS, iOS, Android, tvOS, watchOS and WebAssembly and more." - Introducing .NET 5 | .NET Blog (microsoft.com) Pointers yes, values no. If you raise a .NET Event with a NULL value, the .NET Event Callback will not fire...
  22. I don't really expect many new language features or UX improvements in LabVIEW just because they stop working on NXG. From what we know, there are only a few knowledgeable people at NI who are intimately familiar with the codebase and some of its intricate details which fundamentally drive LabVIEW. There are also many customers who rely on that technology for their own business. Because of that, NI can't just throw more developers at it and change LabVIEW fundamentally unless they find a way to stay compatible, or take a bold step and make breaking changes (which are inevitable in my opinion). LabVIEW will probably stay what it is today and only receive (arguably exciting) new features that NI will leverage from the NXG codebase to drive their business.

      Unfortunately NI hasn't explained their long-term strategy (I'll assume for now that they are still debating it), in particular what LabVIEW/G will be in the future. Will it be community-driven? Will it be a language that anyone can use to do anything? Will it be the means to drive hardware sales for NI and partners? Will it be a separate product altogether, independent of NI hardware and technology?

      There are also a lot of technology-related topics they need to address: Does LabVIEW Support Unicode? - National Instruments Comparing Two VIs in LabVIEW - National Instruments (ni.com) Error 1316 While Using .NET Methods in LabVIEW - National Instruments (ni.com) Using NULL Values or Pointers in LabVIEW - National Instruments (ni.com) Not to forget UX. The list is endless and entirely different for any one of us. If and when these will be addressed is unknown.

      Don't get me wrong, I'm very excited and enthusiastic about LabVIEW and what we can do with it. My applications are driven by technology that other programming languages simply can't compete with. Scalability is through the roof. Need to write some data to a text file? Sure, no problem. Drive the next space rocket, land rover, turbine engine, etc.? Here is your VI. The clarity of code is exceptional (unless you favor spaghetti). The only problem I have with it is the fact that it is tied to a company that wants to drive hardware sales.
  23. Make sure that you have the rights to distribute those binaries before you put them in your build specification. There is a license agreement that you accepted when you installed them on your machine. Note that you don't have to distribute those assemblies yourself. Perhaps there is a runtime installer available which your clients can use. As long as the assemblies are installed on the target machine, LabVIEW will either locate them automatically, or you can specify the location in an application configuration file. Here are some resources on how assemblies are located: Loading .NET Assemblies - LabVIEW 2018 Help - National Instruments (ni.com) How the Runtime Locates Assemblies | Microsoft Docs
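      For illustration, such an application configuration file could look something like this (the file name must match your executable, e.g. MyApp.exe.config; the "libs" subfolder is an assumption for this sketch):

          <?xml version="1.0" encoding="utf-8"?>
          <configuration>
            <runtime>
              <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
                <!-- Also probe the "libs" subfolder next to the executable. -->
                <probing privatePath="libs" />
              </assemblyBinding>
            </runtime>
          </configuration>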
  24. Here is an interesting note: How LabVIEW Locates .NET Assemblies - National Instruments (ni.com)