Everything posted by hooovahh

  1. Thanks, that's it. I couldn't find it because I was searching for Completed ideas, but apparently it was declined. The Type Specialized Structure was in LabVIEW 2016, and was part of my NI Week demo that year. I frantically downloaded the Mac 2016 version and extracted the installer to get the VIM that contained the structure, so I could update my presentation and demo. When it was a Macro it was an XNode behind the scenes, and even the 2017 beta used XNodes. But in the official 2017 release it was its own technology. XNodes and Classes don't work well together, and locking libraries made editing them challenging, so it needed to be its own thing.
  2. I do love how VIMs came to be. I'm having a real hard time finding it, but there was an idea on the Idea Exchange that there should be a function that can delay any data type, similar to the OpenG Wait which takes an error in and passes it out. Jeff K. posted on the thread saying something like "Oh yeah, that is a thing, you just need to use a VIM, here is an example which uses XNodes." It blew my mind. Then in the next release of LabVIEW for the Mac, Jeff K. sneaked a new VIM onto the palette, which some higher-ups in R&D didn't know about, and it contained the type specialized structure, which was also unreleased. I downloaded that version just to get the VIM and structure. I get the feeling the reason VIMs seemingly came out of nowhere is that Jeff had been pushing for them for years, and when they were mostly stable he just put them out there to get the public's response. When everyone saw the potential that he also saw, R&D put effort into getting it out there. This is just my speculation from the outside.
  3. I made a separate thread over here for array performance testing.
  4. OpenG made an amazing set of Array tools many years ago. They weren't perfect, but they had many uses and I've recommended them many times. Improvements to LabVIEW meant some of the array functions weren't well optimized. Years later I tried making a more modern set of Array tools using VIMs, giving up on Polymorphics. I posted this as a package over on VIPM.IO here. Since then I've thought about a few places where the performance of my stuff could be better. Mads and I have had some discussions back and forth in this thread, but I wanted to make a separate post where others could chime in with their performance suggestions too. At the moment my Array VIMs are in LabVIEW 2018. However, due to the potential benefit of Maps and Sets, I think I want to go to at least 2019. In 2020 LabVIEW added the Sorted Array subpalette with a pretty decent binary search. So for now I think LabVIEW 2020 will be what I target for the next Array VIMs package release. Any thoughts on this? I know there is a decent amount of bias in this, but Jim posted the versions of LabVIEW used on VIPM.IO, and 2020 and newer covered over 75% of users. So attached is a zip with a set of array testing VIs. For instance, open the Remove Dups Speed VI and run it. It will run through the 6 different methods of removing duplicates from a 1D array of strings. It will then graph the different methods and check that they all return the same data. If you want to add your own method, edit the (non-typed) enum, then duplicate a case and replace the function with your own code. It randomizes the order of the array methods used. You can also mess with the data being used. At the moment it generates 1000 unique elements, then duplicates them 5 times. If you want to enter data with none to remove, or all of a single type, or whatever, you can change the data used in the test. At the moment there are 8 different array speed tests to compare: things like Delete 1D, Delete 2D, Filter 1D, Filter 1D with a Scalar, Reverse 2D, Search 1D, and the Remove Duplicates already mentioned. There might not be a single best method for a specific function. There are times one method will work better for some set of data, and then a decision needs to be made on what should go in the VIM. My main reason for making this thread is that I hope some people will know of a better way to do something, or come up with a more optimized way to do any of the OpenG array functions. After some discussion and contribution, I plan on updating the Array VIMs package and attributing those who helped. (crosspost) Edit: I just realized someone is probably yelling "Use Git" at me. I hadn't thought of that, sorry; it just felt organic to continue the conversation here because it is where the topic started. 872707096_HooovahhArrayPerformanceTest.zip
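
Since the attached harness is all LabVIEW, here is a rough Python sketch of the same testing idea for anyone skimming: generate the duplicated test data, run each remove-duplicates method in a randomized order, time it, and check that every method returns the same result. The two implementations below are stand-ins, not the actual methods in the enum.

```python
import random
import time

def make_test_data(unique=1000, copies=5):
    # 1000 unique string elements, duplicated 5 times, like the default data in the VI
    data = [f"element_{i}" for i in range(unique)] * copies
    random.shuffle(data)
    return data

def remove_dups_search(arr):
    # Search the output array for every element (roughly the brute-force approach)
    out = []
    for x in arr:
        if x not in out:
            out.append(x)
    return out

def remove_dups_hash(arr):
    # Track seen elements in a hash set, preserving first-seen order
    seen = set()
    return [x for x in arr if not (x in seen or seen.add(x))]

methods = {"search": remove_dups_search, "hash": remove_dups_hash}
data = make_test_data()
order = list(methods)
random.shuffle(order)                      # randomize the method order, like the VI does
results, timings = {}, {}
for name in order:
    start = time.perf_counter()
    results[name] = methods[name](data)
    timings[name] = time.perf_counter() - start

assert all(results[n] == results[order[0]] for n in order)   # all methods must agree
print(timings)
```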
  5. SubVIs that are called as a function, and don't have their terminals change value after entering the VI, should have the terminals on the root of the diagram, not in sub diagram structures. This is because the compiler can't know if those terminals changed value from the last time they were used, so it will read them again. If a terminal is on the root of the diagram it gets read once when the VI is entered and never needs to be read again. Same with indicators: these should be on the root of the diagram, and I think the CLD takes off points if they aren't. https://forums.ni.com/t5/LabVIEW/Community-Nugget-Wired-Terminals-in-Subdiagrams-Clearing-up-the/td-p/2093252 But it is a very minor thing; I just mentioned it as something I'd change, not something I would expect to affect memory. I worked at Magna Electronics, Magna E-Car, and I think Magna Powertrain was in there somewhere as divisions changed and were absorbed, making validation and verification test systems for various automotive components like running boards, inverters, chargers, power control modules, and cameras. Good times, until it wasn't. I knew this was related because it gave a loading warning that VIs were loaded from a new path, and the old path had Magna in it.
  6. Those VIs are quite simple. If there is a memory leak in that simple of a VI then you need to open a support ticket with NI so it can be fixed in a patch. I see things I would change about the VIs, but nothing that should affect memory. Like the terminals should be on the root of the diagram for subVIs. (Also are you doing work for Magna? I used to work at a couple of their divisions)
  7. Oh I totally agree here and wouldn't suggest otherwise. All I'm saying is that specifically for the Filter 1D there are two main approaches: perform the filter in a for loop over all elements in the Filter Array, or run a for loop over all elements in the Array In. If one is small and the other is large, one approach will easily outperform the other, even if it isn't optimized. So maybe the technique could be chosen based on the size of the Filter Array. Additionally, I like the No Duplicates input, because there are often times when I'm filtering an array of numeric indexes, and removing 0 or 1 elements for each Filter Array element makes for lots of shortcuts. I tried optimizing your sort and binary search with this, but the benefit wasn't much, since I think the majority of the time is spent sorting the array for large arrays, and the actual binary search is very quick. Similarly I've added the Array Is Sorted input to my Remove Duplicates VIMs. If you are removing duplicates from an array that has already been sorted, there are shortcuts that can be taken that save lots of time. I think the Remove Duplicates I have is a very slight improvement on the OpenG method, and I'll stick with what I have at the moment. However your sort-then-work approach does give me an idea I'll test when I get time.
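
For anyone following along without LabVIEW open, here is a minimal Python sketch of the two Filter 1D strategies being compared; the function names are just stand-ins for the VIMs, not their real connector panes.

```python
def filter_by_items(array_in, items_to_filter):
    # Loop over the Filter Array and delete matches of each filter element.
    # Tends to win when Items to Filter is small compared to Array In.
    out = list(array_in)
    for item in items_to_filter:
        out = [x for x in out if x != item]
    return out

def filter_by_array(array_in, items_to_filter):
    # Loop over Array In and keep each element only if it is not in the filter set.
    # Tends to win when both arrays are large.
    filter_set = set(items_to_filter)
    return [x for x in array_in if x not in filter_set]

print(filter_by_items([1, 2, 3, 2, 4], [2, 4]))  # [1, 3]
print(filter_by_array([1, 2, 3, 2, 4], [2, 4]))  # [1, 3]
```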
  8. That is pretty fast. I have noticed that on smaller arrays some of the other methods work better. Somewhere between 100 and 1000 elements the first Revision works better than Revision 2. And the Hooovahh method with No Duplicates is faster than Revision 2 at at least 500 elements. I've also seen that with a smaller number of items to filter, other methods work better than Revision 2. Even OpenG beats it when there are many items but only a few to filter. In most cases when I'm filtering an array, I have a relatively large set of data and a small-ish array of things to remove. I'm finding it difficult to know which method is best for the most common use cases. Is it worth spending some processing time reading the array sizes, and then picking the algorithm that works best for that size? The problem I see with this is that various compiler optimizations may take place in later releases of LabVIEW, making some methods faster or slower. Since my method basically loops over all Items to Filter, and your method loops over all of Array In, I could say: if the Items to Filter is 10 or less, run the Hooovahh method; if it is greater than 10, use your Revision 2. If No Duplicates is used and the items to filter is 500 or less, use the Hooovahh method, otherwise use Revision 2. I'll think about it, but a hybrid approach might not be a bad idea, even if it complicates things a bit. I'd say of the array VIMs on my palette, Filter and Remove Duplicates are the ones I most commonly use, so getting the most performance out of these would be good.
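
A minimal sketch of that hybrid dispatch, assuming the 10 and 500 element thresholds from the tests above (they are specific to these benchmarks and could shift with later compiler versions):

```python
def filter_1d(array_in, items_to_filter, no_duplicates=False):
    # Pick the strategy from the size of Items to Filter.
    threshold = 500 if no_duplicates else 10
    if len(items_to_filter) <= threshold:
        # "Hooovahh" style: loop over the (small) filter list
        out = list(array_in)
        for item in items_to_filter:
            if no_duplicates:
                if item in out:
                    out.remove(item)     # Array In has no duplicates, so at most one hit
            else:
                out = [x for x in out if x != item]
        return out
    # "Revision 2" style: one pass over Array In against a hash set
    filter_set = set(items_to_filter)
    return [x for x in array_in if x not in filter_set]

print(filter_1d([5, 1, 5, 2, 3], [5, 3]))  # [1, 2]
```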
  9. NI support sent you to LAVA for a LabVIEW issue? Do we get to charge them if we solve it? Are you saying your palettes are messed up? Can we see a screenshot? If they did get messed up, a reinstall of LabVIEW should fix it, so I'm not sure what else is going on. OpenG shouldn't mess with your already installed palette items. It just adds its own by putting the menu files into a folder that LabVIEW then finds.
  10. I might be wrong, but I don't think your Revised v2 is working. The Result array doesn't have the same number of outputs as the other modes. My Array VIMs package at the moment is 2018 and newer, so I don't mind conditional tunnels, inlining, or VIMs (obviously). No Maps or Sets yet, but maybe one day. That being said, my Filter 1D is already pretty good with your previous help. OpenG is 1.3, Revised is 0.7, my version with your help is 0.7, and if I use the No Duplicates input on my version it is 0.4. I did go through your other VIs that you said had changes, and some had a measurable improvement; some were close enough to what I already had.
  11. Wow, there is a lot to go over here. I'm unsure how long it will take to run various performance tests on these. I'm sure they are better than the native OpenG stuff, but I need to also compare them to the changes I made. I see you tend to reuse input arrays, which is a great idea. I tend to use indexing and conditional tunnels, and my suspicion is that your method is better. Another thing I tend to do is reuse functions in other functions, like how my Filter 1D Array uses the Delete Elements From Array inside it. This makes for more readable code, and improvements to the Delete function will improve the Filter function. But I will have to run some tests to see if there is a measurable improvement with yours.
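
To make the buffer-reuse point concrete in text form, here is a rough Python analogy (LabVIEW's in-placeness works at the buffer level on the wires, so this is only the shape of the idea, not the real VIs):

```python
def delete_indexes_new_array(data, indexes):
    # Build a fresh output array, roughly what indexing with a conditional tunnel does.
    skip = set(indexes)
    return [x for i, x in enumerate(data) if i not in skip]

def delete_indexes_in_place(data, indexes):
    # Reuse the input array: compact the kept elements forward, then trim once at the end.
    skip = set(indexes)
    write = 0
    for read, x in enumerate(data):
        if read not in skip:
            data[write] = x
            write += 1
    del data[write:]
    return data

a = [10, 20, 30, 40, 50]
print(delete_indexes_in_place(a, [1, 3]))  # [10, 30, 50], no second full-size buffer needed
```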
  12. You should look into a local LabVIEW User Group. I hear NI is making a renewed effort in these and they are a great opportunity to meet and talk to local LabVIEW enthusiasts about common interests. I personally find the thought of going on my own very daunting. I know several people that have made it on their own, finding contracts, and executing projects successfully. I assume they like the work, and it must pay really well. But for me I'm just happy enough being the LabVIEW Overlord for a company. The thought of having to be my own sales force, finance department, and project manager, on top of the documenter, designer, and developer roles sounds like a lot of work. I'd rather work less for less money.
  13. Okay, as with most things, there is some nuance. If the number of elements being deleted is very small, the OpenG method is faster, but it has to be pretty small, and the main array you are deleting from needs to be pretty large. Attached is the version that I think works well, and it supports sorted or unsorted indexes to delete, with the same output as the OpenG method, which includes the deleted elements. Methods of deleting multiple array elements Hooovahh Test.vi
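
As a rough text-only sketch of what that attached VI is doing (a Python stand-in, not the VI itself): delete all the indexes in one pass over the array and also hand back the deleted elements. In this sketch the deleted elements come back in ascending index order; matching OpenG's exact ordering for unsorted indexes needs the extra bookkeeping mentioned in the next post.

```python
def delete_elements(array_in, indexes, indexes_sorted=False):
    # If the caller promises sorted indexes we can skip the sort, which is the shortcut
    # the optional "already sorted" input buys you.
    order = indexes if indexes_sorted else sorted(indexes)
    kept, deleted, prev = [], [], 0
    for i in order:
        kept.extend(array_in[prev:i])   # copy the untouched span between deleted indexes
        deleted.append(array_in[i])
        prev = i + 1
    kept.extend(array_in[prev:])        # copy the tail after the last deleted index
    return kept, deleted

print(delete_elements([10, 20, 30, 40, 50], [3, 1]))
# ([10, 30, 50], [20, 40])
```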
  14. Hey, that's a pretty cool speed test. Even if you turn down the samples to something more reasonable like 100 or 1000, the OpenG method still loses by an order of magnitude. Would you mind if I adapted your code into the Hooovahh Array VIMs package? At the moment it is basically the OpenG method, with an optional input for whether the indexes to remove are already sorted or not. The OpenG method returns the deleted elements, and there is some bookkeeping that needs to take place if that array isn't sorted. But if your method works the way I think, its performance with or without sorted indexes should be similar. Also if anyone sees performance improvements for that array package I'd be interested in adding them. Most of it is the OpenG methods, with a few changes to help performance. EDIT: Oh, the OpenG method does work with unsorted elements to remove, and returns the deleted elements in the correct order. I think the shift-and-subarray method can still generate the same output, but it needs extra work to track things, which might eat into that time difference.
  15. Anything is fine. Just mentioning Brian Hoover (Hooovahh) in the VI description, and possibly linking to this thread, would be fine. I put it out with no restrictions. That being said, there is a very small chance that some day I will release a Dialog & User Interface pack on VIPM.IO which could include this.
  16. The problem with this idea is that the ico file contains multiple images in it, at different resolutions. You could in theory take the LabVIEW image constant, save it to a temporary PNG file, then use that path to set the icon. But I think you'd be better off with an ico file itself. You can embed the ico file in the VI as a string constant and do the same thing, saving it to a temporary location as well.
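
The same trick in Python terms, just to show the shape of it; ICON_BYTES is a hypothetical placeholder for the real .ico contents you would embed as a constant, and whatever icon-setting call you use is assumed to accept a file path:

```python
import os
import tempfile

# Hypothetical placeholder: in practice this would be the full bytes of a real .ico file,
# which can hold the image at several resolutions.
ICON_BYTES = b"\x00\x00\x01\x00..."

def icon_temp_path(icon_bytes=ICON_BYTES):
    # Write the embedded icon bytes to a temporary .ico file and return its path,
    # for any icon-setting API that only accepts a path on disk.
    fd, path = tempfile.mkstemp(suffix=".ico")
    with os.fdopen(fd, "wb") as f:
        f.write(icon_bytes)
    return path

print(icon_temp_path())
```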
  17. I think all you need is a static VI reference, and then use the VI's name to open a reference instead of the file path. Here is an example I made years ago. https://forums.ni.com/t5/LabVIEW/building-an-executable-with-vits-with-Labview-2011/m-p/2384984#M740405 By dropping a static VI reference, LabVIEW knows it needs to include it in the built application as a dependency. It will then be in memory, and you can just reference it by name. If you actually want to replace the VI used at runtime, with one on disk, then yes you need the path to be a known good path. But if you just want to open a reference to a thing, and have it be included in the build, a static VI reference is the way to go.
  18. That is some ugly ass code for sure. I'm fairly certain I didn't create that ring, and instead just copied it from some other example set of code. I can never see myself center justifying a control like that so I'm guessing I just got it from something else, and then cut and pasted code until it worked. Enum and format into string is the way to go. That being said I'm pretty sure I would have tested this on a Linux RT machine and didn't see a crash at least running in source.
  19. With a slightly snarky tone, I want to ask if this is part of the 100 year business plan NI has. On a personal level I just hope LabVIEW can stay relevant until retirement. I do still have a perpetual license to 2022 Q3, which supports Windows 11. So even if NI goes away I'll be able to be in my language of choice until 11 is no longer supported. LabVIEW has changed the way I think about programming in such a way that I think it is hard to go to other languages. My brain thinks in parallel paths, and data dependence, not lines of code and single instructions. Whenever I develop in C++ I can't help but feel how linear it is. I'm sure higher level languages are better, but at the same time I don't really want to change. As long as I can work at a place that needs test applications, and doesn't care how they are developed, I'll be happy pushing LabVIEW. The fog of the future is hard to see though. The next year or two looks very uncertain in my career. But looking at the past, working in LabVIEW has felt like winning the lottery. Thinking about this helps me stay positive.
  20. I just downloaded the 0.2.1 posted here and it works just fine. Make sure you are logged in to download. I just tried it and it still works just as well as I remember it. This does require Flash, which most browsers no longer support, but you can find a standalone Flash Player from Adobe. I ran that locally and it worked as expected.
  21. I agree, and it does at times sound desperate. But also, is this just how things are in the corporate world? Like, do they really care how they are perceived if in the end they get what they want? They could offer more money, or they could just do a marketing campaign first. Relatively low risk; maybe it doesn't work out, but I'm sure the people in charge of these kinds of acquisitions have a playbook that I'm unfamiliar with. It sorta feels like we are the kids in divorce proceedings, just going along with little or no influence on what happens to us. I hope weekday dad buys us a new DVD player.
  22. If you are talking about my original post, the code is still there and can be downloaded just fine. Why does everyone have problems downloading this except me? https://lavag.org/applications/core/interface/file/attachment.php?id=8434
  23. I haven't used ChatGPT yet. But from what I've seen, the power of it comes from the conversation-like threads it can make. I saw someone ask it for advice on how to get kids to eat vegetables. It gave a list of ways to eat them, but it was pretty general. They were then able to refine the request and say they needed advice specifically for children, and it came back much better. Any examples that seem very shallow and unimpressive are likely just a single-line request, not a conversation asking it to refine or be more specific. I have been having fun with Stable Diffusion and AI-generated images. This too has the same problem: you most often can't just put in some text and get something awesome. Most of the time you need to refine it over and over, tweaking things, making decisions about what you are looking for, both in the prompts and in the parts of the image and how you want them to change. I made a thread on the dark side about some of my experimentations. In that thread is my new LinkedIn profile picture. This stuff is moving very fast. People are making changes to their workflow to have AI generate concept art, or help with other things like writer's block, alternate endings, or generating tiling textures for surfaces in a game. It isn't replacing industries; it is another tool to get jobs done. Of course you can combine these two things. Here someone asked ChatGPT to explain why AI art isn't real, and then asked it again to say why it is superior.
  24. I've never used any of these people for training, but they have done training in the past: Samuel Taggart, Chris Roebuck, Fabiola De la Cueva, Jeffrey Habets, and Neil Pate are some I found. All I did was google LabVIEW people advertising that they have a CPI. These people are pretty easy to find on LinkedIn.