
Leaderboard


Popular Content

Showing content with the highest reputation since 12/13/2018 in Posts

  1. 7 points
    I've exported the OpenG sources from SourceForge SVN to GitHub. They're located here: https://github.com/Open-G. I'm hoping this will encourage collaboration and modernization of the OpenG project. Pull requests are a thing with Git, so contributions can be encouraged and actually used instead of dying on the vine.
  2. 6 points
    @Jim Kring, it seems to me that the export of the code has gotten a positive response from the community. However, I may be wrong. If anyone has any opinion either way, please come forward. As you can see in this thread, it appears the community has rallied around this effort. This is why I emailed you to come here and share your thoughts.
    In the past, OpenG was a great venue to showcase how a bunch of passionate LabVIEW users can come together and collaborate on something useful. The passion is clearly still there, as shown by the numerous discussions here. The general coding community has moved to Git, with GitHub being the hub. This seems like the logical next step. Who knows what this initiative will lead to. However, I'm expecting that placing OpenG in a neutral GitHub repo will provide the spark and the tools to facilitate open collaboration; then the community can drive the future. The community is full of smart people who have a desire for clean, tested code. And if issues come up, LAVA discussions (or GitHub issues) are there to hash things out.
    When LAVA offered to host all OpenG discussions back in 2011, it was clear that the community wanted to help. When @jgcode put his standards together for how code should be discussed, it was an exciting time. Since then, many people have come forward with offers to add new code into OpenG and fix bugs. For example, @drjdpowell offered his awesome SQLite toolkit for inclusion in OpenG. He got no response either way. It's a shame to have a platform and forums that allow people to post and discuss OpenG code and then ignore it.
    If you have ideas on what the future of OpenG is, I'm hoping it's to be more transparent and inclusive. Providing the tools, resources, and some safety checks along the way is the best way to help passionate individuals dive in. Do you think keeping the status quo of the past 10 years makes sense? It seems to me that the community disagrees. What do you think?
  3. 6 points
    I just started down the rabbit hole of making a new XControl recently. Oh man, such a pain. Here is a little graph I made complaining about the XControl creation process and the time needed to make something useful. Any alternative would be appreciated.
  4. 5 points
    The preference to use init methods instead of non-default class constants is so strong that LabVIEW NXG is planning to never support the feature. Any class constant there will always have only the class's default value.
  5. 3 points
    After I made this post, I decided to bring the LabVIEW Wiki back online. It was not easy and took several days of server upgrades and hacking. The good news is I was able to bring up all the original pages. The even better news is I talked with @The Q and @hooovahh and we are all on the same page as to how to move forward. @The Q did a great job of stepping forward and trying to fill the void that the LabVIEW Wiki's absence had left. He's agreed to migrate all the new content he created on Fandom over to the LabVIEW Wiki and to continue developing new articles and content on the new site. He will also help moderate the Wiki and will be given Admin rights. His help is much appreciated. The LabVIEW landing page created here on LAVA is awesome, but the forums don't lend themselves to static content creation. Instead, @hooovahh has agreed to move the old landing page over; that will be the new home for the landing page. This will become a valuable resource for the community, and I hope all of you start pointing new people in that direction. With many editors, it can only get better and better over time.
    Where do we go from here?
    Logging in: The old accounts are still there. If you're a LAVA old-timer, you can try to log in using your LAVA username. If the password doesn't work, reset it. You can also create a new account here. I'm going to announce a day when new accounts can be created; I'm limiting it for now because of all the spam accounts that could potentially be created (there's an issue with the current Captcha system). If you are super-eager to start creating content now and want to help, send me a direct message on LAVA and I can manually create an account right away. New account creation is now open.
    Permitted content: I'm not going to put restrictions on content at the moment. Obvious vandalism or offensive/illegal content will not be tolerated, of course. However, the guidelines will be adjusted as time goes on and new content is created. There's just not enough content right now to be overly concerned about this. We need content.
    Discussions about the Wiki: Each article page has an associated discussion page where you can discuss issues related to that article. Please use that mechanism (same etiquette as Wikipedia). General Wiki issues/questions and high-level discussions can be done here.
    So now, if you need to add content, you can do it yourself. Feedback as always is welcome.
  6. 3 points
    I agree with James. That could be achieved through composition and adding an abstraction layer. (Sink and Source in the diagram below)
  7. 2 points
    Merry Xmas: Symlink.llb
  8. 2 points
    The only correct, intended, non-hack way to get a reference to an existing clone VI is to use the "This VI Reference" node on the clone VI's own block diagram and have the clone send that reference to someone else as part of its own execution. Any other mechanism was never intended to work and is generally unstable outside of very specific use patterns. There is one other mechanism available, and I requested that R&D leave it working even though it can be unstable: the Open VI Reference node when you pass a string. You can pass the qualified name of the original VI followed by a colon followed by an integer to get a reference to that specific clone. The big, big, big caveat is that you need to close that reference before the clone is disposed. This technique is used by the LV Task Manager, and that tool is the only reason this feature remains instead of being fixed as a bug. Unfortunately, it really isn't possible to make this feature stable without a significant performance hit on calls to clone VIs. It wasn't intended to work, which is why there's no official way to do it; the devs just forgot to close the Open VI Reference loophole. Even if you do get a reference to one clone, any properties you set on that clone while running will be set only on that clone. Likewise, anything you set on the original VI ref will only be set on the original VI (with the exception of breakpoints). LV has intentionally never created a "me and all my clones" ref (the reasons why are a topic for a different discussion thread). If you make changes to the original VI BEFORE the clones are replicated, then the clones will have any changes you make. That generally means never calling the original VI as a subVI and instead always calling it through the Call By Reference node.
  9. 2 points
    It depends what you want to do with the memory and how, but in principle it is pretty easy. This function will simply return error 2 when the allocation was not successful. The challenge is to use this allocated buffer with built-in LabVIEW functions. Depending on what functions you want to use it with, you could for instance pass the buffer into a VI in which you read the binary file in chunks and copy each chunk into this buffer with the Array Replace Subset function. Memory management is a bitch, and you often have to choose between preallocating memory and passing it all the way down a call chain hierarchy to use it there, or letting the low-level functions attempt to do it and pass the result up through the call chain. LabVIEW chooses the latter, and that has good reasons. The first approach is a lot more complicated to implement and use, and generally has less performance, since you tend to copy data twice or more (when using streams, for instance, each data-direction inversion will usually involve a data copy). Allocate Array Buffer.vi
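    For illustration only, here is a minimal sketch in Python (not the attached VI, and the file name and sizes are hypothetical) of the pattern described above: preallocate one buffer, then read the binary file in chunks and copy each chunk into it, which is what the Array Replace Subset approach amounts to.
```python
# Sketch of the pattern described above: one preallocated buffer,
# filled chunk by chunk from a binary file.
CHUNK = 64 * 1024

def read_into_preallocated(path, total_size):
    buf = bytearray(total_size)          # the "allocated array buffer"
    offset = 0
    with open(path, "rb") as f:
        while offset < total_size:
            chunk = f.read(min(CHUNK, total_size - offset))
            if not chunk:
                break
            buf[offset:offset + len(chunk)] = chunk   # like Array Replace Subset
            offset += len(chunk)
    return buf, offset
```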
  10. 2 points
    I just tested on a fresh 2019 machine and I also needed SuperSecretListboxStuff=True in my INI.
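    For anyone trying the same thing, a minimal sketch of where that token would go, assuming the usual LabVIEW.ini next to LabVIEW.exe with its standard [LabVIEW] section:
```ini
; LabVIEW.ini (in the LabVIEW installation directory)
[LabVIEW]
SuperSecretListboxStuff=True
```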
  11. 2 points
    As a community member and package consumer I want to support your efforts.
  12. 2 points
    Screwdrivers are $29.99 though; that's how they get you.
  13. 2 points
    So, I've gotten word that some of y'all are concerned about me because in the last year I basically vanished from both the LAVA forums and the NI forums, other than the Idea Exchange. One person at the CLA Summit in Europe last week wondered if I'd died. Honestly, I had no idea that I'd dropped my post frequency down so low. What happened was that in May the forums that I monitor exploded with content. This is generally a good thing -- it says the community is vibrant. But it meant that I went from having tens of posts in each forum to hundreds of posts, and the deluge overwhelmed me, and I started ignoring the backlog. That just turned into natural attrition. I'm going to try to get back to reading (and posting on) the forums. I doubt I'll get back to the high rate of reading that I had before -- I've got a lot more internal-to-NI stuff that I deal with on a daily basis these days -- but I intend to get back to posting at least every couple of days instead of every couple of months. And, just to be clear: no, I'm not dead. :-)
  14. 1 point
    Note: if you have a Tools Network package, you can request an assigned range of error codes for just your library. For example, "SQLite Library" errors start at 402860.
  15. 1 point
  16. 1 point
    Security in the age of cloud computing and IoT is a huge challenge. I do not think we should start on that discussion here. But you guys seem to assume that that means we can resort to the old ways of doing business - that NI and others should not leverage these things fully, but try to guide their users to the old ways by making sure the new ones are intentionally crippled. The fact is that most of us are way down the rabbit hole already, ignoring the risks because the benefits are too enticing or the business or societal pressure too high. If people can make a business of delivering services that are at the same level of risk as the customer is already taking in other areas (in fact, in the particular case that triggered my interest in this, the security would be improved compared to the current solution - imagine that), but NI is holding them back because they think the security challenges have to be 100% solved first... well... that is a recipe for a dwindling business. The starting point of my digression was something that the supplier is in fact already partly working on; they just have not gotten around to it yet. So it is not like you are defending something that they themselves think is the holy grail of security limitations either. Arguing that the current solution is as good as it gets is never really a winning strategy.
  17. 1 point
    The first discussion with Fab, Dani and Jerzy Kocerka was about where to keep the code. We quickly came to the conclusion that it would be great if GCentral did not host its own repositories on its own servers, but rather was able to tap into existing services like GitHub, GitLab, Bitbucket and so on. That might also help with acceptance. Personally, I would like to keep our code in our own repos at gitlab.com. We have our READMEs, our documentation platform, etc. But if there were an easy way to plug into the GCentral website's listing of available code, I'd love to register our offerings (whatever that might be worth!) and see them featured there. And also the other way around: I'd like it if I could find not only properly packaged code for the three main package managers (VIPM, NIPM, GPM) on GCentral, but also other offerings in other formats. We like to keep as many dependencies as possible inside our project directories, so we work a lot with packages that are not installed via a package manager but are either extracted/copied into the project directory or sometimes linked as Git submodules.
  18. 1 point
    Thousands of releases? I kind of doubt it. Leaving aside LabVIEW prior to the multiplatform version (2.2.x and earlier, which were Macintosh only), there have been 2.5, 3.0, 3.1, 4.0, 5.0, 5.1, 6.0, 7.0, 7.1, 8.0, 8.2, 8.5, 8.6, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, and 2019 releases so far. Of each of them there was usually one and rarely two maintenance releases, and of each maintenance release between 1 and 8 bug-fix releases. That probably only amounts to about 100 releases in total, and maybe another 50 for beta releases of these versions (a beta usually has 2 to 4 intermediate releases, although that tends to be no more than 2 in the last 10 years or so).
    I'm not aware of any LabVIEW release that had the debug symbols exposed. PDBs were used even in Microsoft Visual C 6.0, the first version that was probably used by NI to create a released LabVIEW version (NI only switched to Microsoft C for the standard 32-bit builds for Windows NT; the Windows 3.1 versions of LabVIEW were created using Watcom C 10.x, which was the only compiler able to create full 32-bit executables that could run on 16-bit Windows 3.1 through the built-in DOS extender runtime). Microsoft makes this pretty hard to happen by accident anyhow, as such DLL/EXE files would normally link to the debug version of the Microsoft C runtime library, and you can't legally install that on a computer without installing the entire Microsoft C compiler contained in the Visual Studio software. There is definitely no downloadable installer for the debug version of the C runtime engine.
    The only early leak I'm aware of was that the original LabVIEW 2.5 prerelease contained a huge extcode.h file in the cintools directory that showed much more than the officially documented LabVIEW manager functions. About half of it was still pretty much unusable, as you needed other functions that were not exposed in there to make use of some of the features, and a lot of those functions were removed from the exported functions around LabVIEW 4.0 and 5.0 as they were considered obsolete or undesirable to pollute the exported symbols list, but it did contain a few interesting functions that are still exported from the LabVIEW kernel but not declared in the current extcode.h file. They fixed that extcode.h bug before the official release of LabVIEW 3.0, which was the first non-beta version of LabVIEW running on computers other than the Macintosh. (2.5 was basically a beta release, called a prerelease version, to have something to show for NI Week 1992 that ran on Windows 3.1; there was a 2.5.1 and, I believe, a 2.5.2 bug-fix release of it in 1993.)
    Also, lvrt.dll is a development that only got introduced around LabVIEW 6.0. As that was released in 2000, it most likely used at least Microsoft Visual Studio C++ 6.0. Before that, the application builder concatenated the generated runtime LLB onto a stub executable that contained the entire LabVIEW runtime engine. That was a pretty neat feature, as it created a single-file executable, but as LabVIEW was extended and more and more functionality was implemented in external files and DLLs that get linked dynamically, this was pretty much unmaintainable in the long run and the entire runtime engine was externalized.
  19. 1 point
    If you grew up on Fortran I would have hoped they had let you retire by now. No rest for the wicked?
  20. 1 point
    I noticed on sourceforge that there is a version 4.2 of OpenG Zip. Will it be released as a package anytime soon?
  21. 1 point
    The OpenG package file is simply a ZIP file in disguise, with an INI-style package file (called spec) in the root that describes where the different file groups should go in your LabVIEW installation, with restrictions for which version and platform they apply to. If you have 7-Zip installed you can right-click on a *.ogp file and select 7-Zip->Open Archive. Then look in the directories for "File Group 8", and in there is the ogsetup.exe file. This is an Inno Setup file that installs the necessary packages into the correct NI shared location for RT packages. I chose to do it this way because the files have to be installed in a location that is only writable when the process is elevated, and rather than requiring the user to restart VIPM explicitly as admin (and trying to guess the correct location to write the files to from within a post-install hook VI), I created an Inno Setup installer for the necessary files with an embedded manifest that requests elevation authorization from the user. After that, and provided you have full cRIO support in NI MAX for your target installed on your machine, you should be able to select the package in the Custom Software Install from within NI MAX. Basically, I chose to only extract the ogsetup.exe file into a LabVIEW 32-bit installation, since this is the only way to program LabVIEW RT programs anyway. I figured that the chance that someone would want to install SW packages to an RT target from a computer that is not also used to program that target would be very small.
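    Since an .ogp package is just a ZIP archive, you can also inspect it programmatically. Here is a minimal Python sketch (the package file name is hypothetical); it lists the archive contents, including the spec file, and pulls the ogsetup.exe out of the "File Group 8" directory mentioned above.
```python
import zipfile

# Hypothetical package name; any OpenG *.ogp file is a plain ZIP archive.
with zipfile.ZipFile("oglib_lvzip-4.2.0-1.ogp") as ogp:
    for name in ogp.namelist():
        print(name)                      # shows the 'spec' file and the File Group folders
    # Extract the RT installer described above, assuming it sits under "File Group 8"
    for name in ogp.namelist():
        if name.startswith("File Group 8/") and name.endswith("ogsetup.exe"):
            ogp.extract(name, "extracted")
```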
  22. 1 point
    View File: Plasmionique Modbus Master
    This package contains the Plasmionique Modbus Master library for LabVIEW. It supports RTU, ASCII and TCP modes with the following function codes:
    0x01 - Read Coils
    0x02 - Read Discrete Inputs
    0x03 - Read Holding Registers
    0x04 - Read Input Registers
    0x05 - Write Single Coil
    0x06 - Write Single Register
    0x07 - Read Exception Status
    0x0F - Write Multiple Coils
    0x10 - Write Multiple Registers
    0x16 - Mask Write Register
    0x17 - Read/Write Multiple Registers
    0x2B/0x0E - Read Device Identification
    Other features include:
    - Sharing a COM port across multiple Modbus sessions using VISA locks (10 second timeout).
    - Sharing a Modbus session across multiple communication loops.
    - TCP transaction ID handling to ensure that requests and responses are matched up correctly in case responses are received out of order.
    - Modbus Comm Tester, available through the "Tools->Plasmionique" menu, for testing communication with a slave device without writing any code.
    - Detailed help document available through the "Help->Plasmionique" menu.
    Examples are included in "<LabVIEW>\examples\Plasmionique\MB Master\":
    - MB_Master Comm Tester.vi: Demonstrates usage of the API to open/close a connection and communicate with a Modbus slave device.
    - MB_Master Multiple Sessions.vi: Demonstrates usage of the API to open concurrent Modbus sessions.
    - MB_Master Simple Serial.vi: Demonstrates polling of a single input register over a serial line.
    Download a copy of the user guide here: MB_Master - User Guide.pdf
    Note that Version 1.3.4 of this library has been certified compatible with LabVIEW and has been released on the LabVIEW Tools Network: http://sine.ni.com/nips/cds/view/p/lang/en/nid/214230
    The most recent version of this library will always be released on LAVA first before going through NI's certification process.
    ***This project is now available on GitHub: https://github.com/rfporter/Modbus-Master
    Submitter: Porter
    Submitted: 04/01/2016
    Category: LabVIEW Tools Network Certified
    LabVIEW Version: 2012
    License Type: BSD (Most common)
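    For readers curious what a request looks like on the wire, here is a minimal Python sketch (purely illustrative, not part of this library) of a Modbus TCP "Read Holding Registers" (0x03) request frame, including the transaction ID that a master uses to match responses to requests, as the feature list above mentions.
```python
import struct

# Minimal illustrative sketch: build a Modbus TCP Read Holding Registers request.
def read_holding_registers_request(transaction_id, unit_id, start_address, quantity):
    pdu = struct.pack(">BHH", 0x03, start_address, quantity)               # function code, start, count
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)  # txn id, protocol 0, length, unit
    return mbap + pdu

# Example: read 10 holding registers starting at address 0 from unit 1.
frame = read_holding_registers_request(transaction_id=1, unit_id=1, start_address=0, quantity=10)
print(frame.hex())
```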
  23. 1 point
    That's why you are paid so much. Or if you aren't; you should be.
  24. 1 point
    I suspect UTF uses "Set Scope", which is a separate method that doesn't do the propagation. The UTF authors may have written their own propagation loop. You could use the same workaround (I acknowledge how annoying that would be to write, but at least the option exists). BTW, looking at the code, it turns out that the "And Propagate" version was written to always prompt; that's how the Library Properties dialog works. On my machine as of this morning, there's a new Boolean parameter to "skip prompt" on that method.
  25. 1 point
    You may want to consider using a Data Agnostic Smart Probe instead of a subVI. From that probe, you can get a reference to the calling VI. And I think there are ways to figure out which probes are on which wires, etc. Here's my nugget post on Data Agnostic Smart Probes: https://forums.ni.com/t5/LabVIEW/Darren-s-Occasional-Nugget-02-23-2018/m-p/3759109
  26. 1 point
    The "Individual Offline Installers" link below the download button points to the offline installer.
  27. 1 point
    Thanks. Issue 34 created.
  28. 1 point
    The link at the bottom will bring you to this page: http://www.ni.com/en-us/support/downloads/software-products/download.labview.html#305508 You just have to select the 2018 SP1 Patch and click download. This will download the f4 (latest) patch. The en-us version is pretty stable in my experience (except for a few dead links here and there). Can't say much about the others. I'm still surprised they bother to check the associated SSP, considering that the License Manager checks it anyway.
  29. 1 point
    I would create a Clone method in the class. If you inherit from any classes, you also need to use "Call Parent Method" so the parent class's attributes will be cloned as well.
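    A minimal sketch of the same idea in Python (illustrative only; in LabVIEW this would be a Clone VI that uses Call Parent Method): each class copies its own private data and chains to its parent so the whole hierarchy gets cloned.
```python
import copy

class Parent:
    def __init__(self):
        self.parent_attr = [1, 2, 3]

    def clone(self):
        new = self.__class__.__new__(self.__class__)
        new.parent_attr = copy.deepcopy(self.parent_attr)   # copy the parent's own data
        return new

class Child(Parent):
    def __init__(self):
        super().__init__()
        self.child_attr = {"x": 1}

    def clone(self):
        new = super().clone()                               # like "Call Parent Method"
        new.child_attr = copy.deepcopy(self.child_attr)     # then copy the child's own data
        return new
```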
  30. 1 point
    Nope. You may want to extend that courtesy but it isn't required for BSD.
  31. 1 point
    Here is a quick and dirty edit. It allows column separators to be moved, but I noticed that on resize it will set the column widths. This means that if you manually move the columns and then resize the control, it may change the columns in an unexpected way. But at that point you can manually move the separators again. I only have 2017 and 2018, so this is for 2017 and newer now. Variant_Probe-2.4.3-0.ogp
  32. 1 point
    Thanks Antoine for your workaround to fix the LabVIEW crash after installation of the 1.4.0.15 version, also on LV2018 SP1. Your tip just helped me, too. I had LabVIEW crashing also with the previous 1.3.0.12 version installation on LV2015 SP1; there, for some reason, it helped to install all required packages one by one with the JKI VIPM. Apart from that recent installation issue, the Control Class UI Tools addon is a fantastic tool, and we use it frequently. Thanks François!
  33. 1 point
    A radio-button control has a natural interlock mechanism. You can customise the Booleans to be regular buttons and also change their positions to get a 2D-grid type feel if that is what you want.
  34. 1 point
    That may be a while - but the package file itself is at the top of the thread.... Edit: Thinking about it, because I'm dependent on the ZMQ bindings, which are not available on the NI Tools Network, I'm not sure I can put this package (and the SHA-256 library) on the NI Tools Network either - so it will always need to be installed from manually downloaded VIPM files.
  35. 1 point
    Hmm, there was a problem I had there, but I thought the version I packaged had fixed it. My current development version should find that path - but it depends quite a lot on whether you have multiple Pythons installed on your machine. Basically, there doesn't seem to be a bulletproof way of getting the correct path in Windows.... That's a sensible idea - it's going into the development code. That's largely a result of the test client being mainly aimed at debugging the protocol and testing message handling, rather than at tightly integrating LabVIEW programs with the remote kernel. That said, I'm in the process of adapting the client to allow different methods of locating and connecting to the kernel, and that will include suppressing the kernel shutdown message on exit. I'm also (very slowly) working on an implementation of a LabVIEW universal-binary-json serialiser/deserialiser with a view to creating some custom IPython messages for transferring binary data efficiently between LabVIEW and Python. The idea is that the LabVIEW client would create message handlers at the Python end that would allow LabVIEW data to be pushed directly into the Python namespace, or to request Python data to be sent back to LabVIEW. Don't hold your breath though, the day job comes first...
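    For what it's worth, here is a minimal sketch (an assumption about one possible approach, not the toolkit's actual code) of ways to discover installed Python interpreters on Windows, which is the path-finding problem mentioned above:
```python
import shutil
import subprocess
import sys

print(sys.executable)            # the interpreter running this script
print(shutil.which("python"))    # first python.exe on PATH, or None

# If the Windows "py" launcher is installed, it can list every registered
# interpreter together with its path.
result = subprocess.run(["py", "-0p"], capture_output=True, text=True)
print(result.stdout)
```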
  36. 1 point
    I mostly post these over on Twitter, but here's some LabVIEW memes I've made: LabVIEW Style Checklist: "Size the block diagram window no larger than the screen size." Me: ✅
  37. 1 point
    Are there plans to fix all the bugs? I've just spent a month writing two XControls, and all that time was mainly spent finding workarounds to the bugs (and I still don't know why some of the workarounds actually work).
  38. 1 point
    See the posts starting here. You have to use opkg to install sqlite on a Linux RT system.
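    For reference, a rough sketch of what that looks like from a shell on the target; the exact package name in the NI Linux RT feed is an assumption, so list what the feed offers first:
```sh
# On the Linux RT target (e.g. over SSH), refresh the package feeds, see what
# SQLite packages are available, then install one. 'sqlite3' is an assumed name.
opkg update
opkg list | grep -i sqlite
opkg install sqlite3
```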
  39. 1 point
    The root loop is definitely per process. It's simply the primary thread started up by Windows for a process, in which the GetMessage(), TranslateMessage(), DispatchMessage() Windows API calls are made over and over again, with minor LabVIEW-specific adaptations. This thread is associated with a hidden window whose window procedure then translates everything into a platform-independent message infrastructure that very much resembles the classic Mac message loop. This is basically the famous root loop, with the message procedure in the hidden window being a sort of platform wrapper around it. Under Windows there is a potential extra complication, as OLE marshalling hooks into the process's GetMessage() API completely outside the control of LabVIEW. So if you interface with OLE/COM/ActiveX, and to some extent even .NET components, things can get interesting.
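    To make that concrete, here is a minimal sketch of the classic Windows message pump the post describes, written in Python/ctypes purely for illustration (LabVIEW's actual root loop is internal C code with its own adaptations):
```python
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32  # Windows only

# The classic message pump: a GUI process's primary thread runs a loop like this.
msg = wintypes.MSG()
while user32.GetMessageW(ctypes.byref(msg), None, 0, 0) > 0:
    user32.TranslateMessage(ctypes.byref(msg))   # turn key presses into character messages
    user32.DispatchMessageW(ctypes.byref(msg))   # hand the message to the window procedure
```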
  40. 1 point
    For fun I thought I'd make a list of the reasons I can remember why people sometimes choose UDP over TCP:
    - Connection overhead of TCP (initiating a connection)
      - Mainly a big deal with web browsers (each page has to connect to several domains, and each takes a few (usually 2, I believe) TCP connections, which introduces latency). This is part of why HTTP/3 exists.
      - Not a big deal for a 2-hour test where you open one connection.
    - Don't need packet de-duplication or re-transmits
      - Video streaming, or there is an application-specific usage pattern that makes application-layer handling of faults the better route (HTTP/3).
      - This application needs reliable transmission, as it does not implement reliability at a higher level.
    - Want to avoid ordered transmission/head-of-line blocking
      - This really means you are implementing multiplexing at the application level rather than at the TCP level -- it's a hell of a lot easier to open 10 TCP connections, especially in applications on closed networks which are not "web scale".
      - This is the reason HTTP/2 exists. HTTP/2 has connection multiplexing on TCP; HTTP/3 has connection multiplexing over UDP.
      - Given the reliable transmission and rate requirement, I'm assuming ordered transmission is desired.
    - Want to avoid congestion control
      - Bad actor attempting to cause network failures, or: self-limited bandwidth use (this application falls under this category), or: implement congestion control at the application layer (HTTP/3).
    - Memory/CPU usage of the TCP implementation
      - Erm... LabVIEW
    - Network engineers want to heavily fiddle with parameters and algorithms without waiting for the OS kernel to update
      - HTTP/3 is supposed to be faster because of this -- TCP is tuned for the networks of 20 years ago, or so it's been said, and HTTP/3 can be tuned for modern networks.
      - I'm assuming this is not Michael.
    On a closed network, for this application, it's hard to see a benefit to UDP. (It occurs to me Michael never said it was a closed network, but if he put a Pharlap system on the internet... 😵)
  41. 1 point
    This is a very good question, and while I will try to answer, at the moment I have mostly a hazy vision whose details I have not fully worked out. My priority is and has been to elaborate a good design for the core of Rebar, then create enough of an implementation of it that you can do some useful things in it and can see a path towards other useful things. How much the design and implementation of Rebar/VI interop gets worked out will depend very much on how much demand there is for it.
    A VI calling a Rebar function should look about the same as calling a subVI. A Rebar function's signature will contain information equivalent to the inplaceness information computed by the VI compiler for a VI, so the VI compiler will know when it needs to copy the data that it sends to Rebar. Similarly, a Rebar function should be able to call a VI; the Rebar compiler may need to dig out inplaceness information for the VI that is normally invisible to the user. One difference in this direction would be that Rebar should be explicit that it is obtaining a specific clone of the VI and calling that, which it might wrap up into a closure-like object.
    So then the question is what kind of data you can pass back and forth between Rebar and VI. The rule here is that Rebar cannot pass VI any raw values whose invariants VI will not guarantee. Specifically:
    - Rebar can't pass its own references, or anything containing its references, to VI, since VI won't do lifetime checking.
    - Rebar can't pass values that have destructors, because VI isn't guaranteed to call them when appropriate.
    For any types that Rebar cannot pass raw to VI, it must wrap them in refnums. This amounts to having a reference-counted shared object between Rebar and VI, so there are still some Rebar types that wouldn't qualify, but it should be enough to allow passing the most interesting Rebar-created values to VI and having the runtime maintain their invariants. In this way, you could define a TCP connection type, a file handle type, an IMAQ image type, or whatever you want in Rebar, and provide an API for it back to VI with refnums. This would have the nice result of allowing you to re-implement many parts of the LabVIEW runtime in Rebar.
    That's about as far as I've got on interoperability. Obviously the devil is very much in the details here, and often creating an interop system between two different languages is way harder than designing each one in isolation. Like I said above, though, none of this matters much unless Rebar is interesting enough that people want to use the two side by side.
  42. 1 point
    I use the DLL wizard often and it has saved me a lot of time. It took a while to understand how to massage the .h files into something digestible, but once I got the hang of it, it was worth the effort. The error handling is pretty good at pointing you to the offending spot in the .h file. I think the wizard has improved over time, so if your last exposure was years ago I would give it another try.
  43. 1 point
    It is a matter of finding the right API calls in the LibreOffice API and using them. For charts, I think the easiest method would be to add a chart sheet and play with its parameters.
  44. 1 point
    I think the biggest mistake from NI was to not add users with 20 years of experience to their development team. I believe they only put in marketing people and some developers... but no real users. Benoit
  45. 1 point
    Killer feature: new hardware only supported on NXG 🤦‍♂️
  46. 1 point
    Overview
    In order to quickly and efficiently prepare source for distribution, a build system was necessary to abstract away the conversion of source into the different types of deliverables (VI Packages, executables, DLLs, etc.) as well as abstract away the build order of our software hierarchy. Many have undertaken to solve this problem. I don't claim to have created a silver bullet. But I do hope that the system I've put together (and am releasing as open source) will act as a starting point for you to extend and customize to meet your needs. I've endeavored to employ good software development principles, including separation of concerns and the SMoRES principles. I'll be the first to volunteer that it isn't perfect and, as always, our best software is constantly a work in progress. However, I believe the build system is at a stage to be at least moderately helpful to a handful of people in our community.
    Description
    The Application's UI: The UI is designed to guide someone through the build process, allowing them to select what components or exports they would like to build, if and how they would like to be notified about the build, auto-submission options, and source code control. I've attached a small video titled "Build UI Demo.mp4" below.
    UML and APIs: All UML and API documentation is included in the Word document in each released zip file.
    Software Requirements
    - LabVIEW 2017
    - NI Application Builder
    - VI Package Manager Pro
    Other dependencies are listed in the Instructions in each zip file.
    UML Overview
    As of 1.4.0-58 the UML looks like the attached diagram.
    Build UI Demo.mp4
    Release Notes
    1.4.0-123 (Component Builder 1.4.0-123.zip)
    - Added new NI Package Manager API
    - Added new NI Package Manager BuildSpec Utils
    - Added procedure and documentation for the component template and its anatomy
    1.4.0-113 (Component Builder 1.4.0-113.zip)
    - Added documentation for the SCC API
    1.4.0-111 (Component Builder 1.4.0-111.zip)
    1.4.0-99 (Component Builder 1.4.0-99.zip)
    - Added a few new P4 API functions allowing the creation of a session if a user is already logged in
    - Added log-in and log-out tests to the test suite
    - Added a new function to tag all p4 paths in a label with the label
    - Resolving the input path to a p4 depot path
    - Adding quotes around p4 paths
    1.4.0-91 (Component Builder 1.4.0-91.zip)
    - The refactor of the LabVIEW SCC API is complete and the build process is linked to the new install location. The LabVIEW SCC API can now be used independently of the Component Build process.
    - I've included a test suite for the P4 implementation of the SCC API. It assumes that you have checked the two files in "Build Instructions\LabVIEW SCC API\Test Suite\Tests\Test Dir" into Perforce.
    1.4.0-85 (Component Builder 1.4.0-85.zip)
    - This is a major refactor of the SCC API. I've modeled the p4 label with a new API. This is primarily an intermediary release. I intend on breaking SCC out of the component builder into its own separate component in the next release.
    1.4.0-81 (Component Builder 1.4.0-81.zip)
    - Created a "proxy" API for VI Package Manager interactions. Sometimes the VI Package Manager API would hang, so I now call by reference and will kill and restart VIPM if it doesn't respond in time.
    1.4.0-75 (Component Builder 1.4.0-75.zip)
    - Fixed reference counting for executable builds.
    1.4.0-73 (Component Builder 1.4.0-73.zip)
    - In the case where a user has specifically unchecked "auto increment", the build process will still auto increment the build. Builds must be auto incremented.
    - I've released a new build of the container that has a minor bug fix.
    - Added file utility tools to aid with using "net use" to move and copy files across the network.
    1.4.0-59 (Component Builder 1.4.0-59.zip)
    - Fixed a minor spelling error in the component template: "componet".
    - Added documentation for the Custom Install Step Launcher.
    - Added build instructions for the Custom Install Step Launcher.
    1.4.0-58 (Component_Builder_1.4.0-58.zip)
    - Added the ability to export NI Source Distributions, NI Executables, NI Installers, and NSIS Installers
    - Released a template component that exports an NI Package and VI Package, including step-by-step instructions
    - Released the "Custom Install Step Launcher" to execute VIs as pre-install, post-install, and post-install-all actions for NI Packages
    - Released comprehensive documentation included in the zip file
    Component_Builder_5_31_2018.zip Component_Builder_10_11_2018.zip
  47. 1 point
    I've made a recent entry on the Idea Exchange about this. Here's the VI I'm experimenting with, referred to in that link: Array of Variants to Cluster of Variants.vi. It converts arrays (up to size 50) into clusters of variants, which can then be converted to clusters.
  48. 1 point
    Norm - this is really cool. I think this part of the last video really hits home and defines the benefits of this design pattern, highlighting why using strings would cause an epic fail. I agree with Shaun in that strings are generally easier to read than integers in a case select, but in this case it doesn't really matter - the benefits are huge (you get some readability from the casted type anyway). I can see the benefit of this straight away, especially for scripting, and will be adding this to my templates folder. Thanks for posting!
  49. 1 point
    I struggled with this question for a while too, though not specifically with regard to XControls. If I'm understanding your question, first you have to ask yourself, "should I use an observer pattern or a publish-subscribe pattern?" Often these two terms are used interchangeably, but I think there are important differences.
    In an observer pattern, the code being observed has no knowledge of any observers. It might have 20 observers; it might have 0. It doesn't care and just continues doing what it's doing. If you think of observing something through a telescope, the thing you're looking at generally doesn't know you exist, much less that you are interested in it. In a publish-subscribe pattern, the subscriber has to register with the publisher, and the publisher generally keeps track of who the subscribers are and how many there are. Consider subscribing to a newspaper: you call up the publisher, they record your name and address, and then they deliver the paper to your roof until you tell them to stop.
    Which pattern you choose depends on how you want to manage the lifetime of the publisher/observable code module. If you want the module to self-manage its lifetime and stop only when nothing depends on the events it generates, use the publish-subscribe pattern. If you're willing to manage the module's lifetime yourself, or if you don't care if the module stops while other code is waiting on its events, use the observer pattern.
    User events work pretty well for the observer pattern. However, if you expose the User Event Refnum, be aware that observing code can destroy the refnum and generate an error in the observable code. I prefer to expose the Event Registration Refnum and keep the User Event Refnums private. That protects the observable code from malicious code and inexperienced developers. The downside is that it's harder for the observing code to dynamically register/unregister for a subset of the events the observable code produces. I've experimented with using an event manager class as a mediator between the observable code and the observing code. The event manager registers for all the events the observable modules expose. The observing code then tells the event manager which events it is specifically interested in. I think there must be a better way, but I haven't figured it out yet. I don't have a very good feel for implementing a robust publish-subscribe pattern. My sense is injecting user events into the publisher isn't the best way to do it. Callback VIs? Separate subscribe/unsubscribe methods for bookkeeping? I don't know; I haven't explored it enough. For the observer pattern, I prefer option A. I have an example on my other computer. I'll try to post it later today.
    I agree with everything you said except this. I believe the user event queue exists at the event structure, not at the user event refnum or event registration refnum. If there are no registered event structures, there is no queue to fill up. Is the resource overhead of generating a user event on a user event refnum or event registration refnum that is not wired into an event structure high enough that this is something we need to worry about? Or is this just an easier way to manage the bookkeeping of which events the listener is interested in? Since the user event refnums and event registration refnums are strongly typed, you can only put them in an array if they have the same data type. What's the recommended technique for dynamically registering/unregistering for events that have different data types?
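    To make the distinction concrete, here is a minimal sketch in Python (purely illustrative, not LabVIEW code and not option A mentioned above): the observable in the observer pattern just fires to whoever happens to be attached, while the publisher in publish-subscribe does the bookkeeping of who subscribed and can use that to manage its own lifetime.
```python
# Observer pattern: the observable doesn't manage or care about its observers.
class Observable:
    def __init__(self):
        self._callbacks = []              # observers attach themselves here

    def attach(self, callback):
        self._callbacks.append(callback)

    def fire(self, event):
        for cb in self._callbacks:        # zero or twenty observers, it doesn't care
            cb(event)

# Publish-subscribe pattern: the publisher tracks subscribers by identity and
# could, for example, keep running only while at least one subscriber remains.
class Publisher:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, name, callback):
        self._subscribers[name] = callback

    def unsubscribe(self, name):
        self._subscribers.pop(name, None)

    @property
    def has_subscribers(self):
        return bool(self._subscribers)

    def publish(self, event):
        for cb in self._subscribers.values():
            cb(event)
```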
  50. 0 points
    I'm an avid GoT fan, but I'm waiting for Season 8 to arrive..... 🤣

