hooovahh
Moderators
  • Posts: 3,360
  • Days Won: 268

Posts posted by hooovahh

  1. That is pretty fast.  I have noticed that on smaller arrays some of the other methods work better.  Somewhere between 100 and 1000 elements the first Revision works better than Revision 2.  And the Hooovahh method with No Duplicates is faster than Revision 2 up to at least 500 elements.  I've also seen that with a smaller number of items to filter, other methods work better than Revision 2.  Even OpenG beats it when there are many items, but only a few to filter.  In most cases when I'm filtering an array, I have a relatively large set of data and a small-ish array of things to remove.

    I'm finding it difficult to know which method is best for the most common use cases.  Is it worth spending some processing time reading the array sizes, and then picking the algorithm that works best for that size?  The problem I see with this is that various compiler optimizations may take place in later releases of LabVIEW, making some methods faster or slower.  Since my method basically loops over all of Items to Filter, and your method loops over all of Array In, I could say that if Items to Filter has 10 or fewer elements, run the Hooovahh method, and if it has more than 10, use your Revision 2.  If No Duplicates is used and there are 500 or fewer items to filter, use the Hooovahh method, otherwise use Revision 2.  I'll think about it, but a hybrid approach might not be a bad idea, even if it complicates things a bit.  I'd say of the array VIMs on my palette, Filter and Remove Duplicates are the ones I most commonly use, so getting the most performance out of these would be good.
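
    For illustration, here is a minimal Python sketch of that kind of threshold-based dispatch.  The thresholds and the two helper functions are placeholders standing in for the Hooovahh and Revision 2 implementations, not measured LabVIEW code.

    # Hybrid "Filter 1D Array" sketch: pick the algorithm based on how many
    # items there are to filter.  Thresholds (10 and 500) are the rough
    # crossover points mentioned above, not benchmarked values.
    def filter_by_items(array_in, items_to_filter):
        # Stand-in for the Hooovahh method: one pass per item to filter.
        result = list(array_in)
        for item in items_to_filter:
            result = [x for x in result if x != item]
        return result

    def filter_by_array(array_in, items_to_filter):
        # Stand-in for Revision 2: one pass over Array In with a set lookup.
        remove = set(items_to_filter)
        return [x for x in array_in if x not in remove]

    def filter_1d_array(array_in, items_to_filter, no_duplicates=False):
        threshold = 500 if no_duplicates else 10
        if len(items_to_filter) <= threshold:
            return filter_by_items(array_in, items_to_filter)
        return filter_by_array(array_in, items_to_filter)

    print(filter_1d_array([1, 2, 3, 4, 5], [2, 4]))  # [1, 3, 5]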

  2. NI support sent you to LAVA for a LabVIEW issue?  Do we get to charge them if we solve it?  Are you saying your palettes are messed up?  Can we see a screenshot?  If they did get messed up, a reinstall of LabVIEW should fix it, so I'm not sure what else is going on.  OpenG shouldn't mess with your already installed palette items.  It just adds its own by putting the menu files into a folder that LabVIEW then finds.

  3. I might be wrong, but I don't think your Revised v2 is working.  The Result array doesn't have the same number of outputs as the other modes.  My Array VIMs package at the moment targets LabVIEW 2018 and newer, so I don't mind conditional tunnels, inlining, or VIMs (obviously).  No Maps or Sets yet, but maybe one day.

    That being said, my Filter 1D is already pretty good with your previous help.  OpenG is 1.3, Revised is 0.7, my version with your help is 0.7, and if I use the No Duplicates input on my version it is 0.4.  I did go through your other VIs that you said had changes; some had a measurable improvement, and some were close enough to what I already had.

  4. Wow, there is a lot to go over here.  I'm unsure how long it will take to run various performance tests on these.  I'm sure they are better than the native OpenG stuff, but I need to also compare them to the changes I made.  I see you tend to reuse input arrays, which is a great idea.  I tend to use indexing and conditional tunnels, and my suspicion is that your method is better.  Another thing I tend to do is reuse functions in other functions, like how my Filter 1D Array uses the Delete Elements From Array inside it.  This makes for more readable code, and improvements to the Delete function will improve the Filter function.  But I will have to run some tests to see if there is a measurable improvement with yours.
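
    To illustrate the reuse point in plain code, here is a tiny Python sketch; the function names mirror the VI names, but the implementations are invented and are not the actual package.

    # Filter is built on top of Delete, so any speedup to the delete
    # function automatically benefits the filter function.
    def delete_elements_from_array(array_in, indexes):
        to_delete = set(indexes)
        return [x for i, x in enumerate(array_in) if i not in to_delete]

    def filter_1d_array(array_in, items_to_filter):
        remove = set(items_to_filter)
        indexes = [i for i, x in enumerate(array_in) if x in remove]
        return delete_elements_from_array(array_in, indexes)

    print(filter_1d_array(["a", "b", "c", "b"], ["b"]))  # ['a', 'c']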

  5. On 2/20/2023 at 11:22 AM, Mark Moser said:

    Sorry if this is rambly; I rarely have the opportunity to talk to other career LabVIEW developers.  Thanks for your input!

    You should look into a local LabVIEW User Group.  I hear NI is making a renewed effort in these, and they are a great opportunity to meet and talk to local LabVIEW enthusiasts about common interests.  I personally find the thought of going out on my own very daunting.  I know several people who have made it on their own, finding contracts and executing projects successfully.  I assume they like the work, and it must pay really well.  But for me, I'm just happy enough being the LabVIEW Overlord for a company.  The thought of having to be my own sales force, finance department, and project manager, on top of the documenter, designer, and developer roles, sounds like a lot of work.  I'd rather work less for less money.

  6. Okay, as with most things, there is some nuance.  If the number of elements being deleted is very small, the OpenG method is faster, but it has to be pretty small, and the main array you are deleting from needs to be pretty large.  Attached is the version that I think works well; it supports sorted or unsorted indexes to delete, with the same output as the OpenG method, which includes the deleted elements.

    Methods of deleting multiple array elements Hooovahh Test.vi

  7. Hey, that's a pretty cool speed test.  Even if you turn down the samples to something more reasonable like 100 or 1000, the OpenG method still loses by an order of magnitude.  Would you mind if I adapted your code into the Hooovahh Array VIMs package?  At the moment it is basically the OpenG method, with an optional input for whether the indexes to remove are already sorted or not.  The OpenG method returns the deleted elements, and there is some bookkeeping that needs to take place if that array isn't sorted.  But if your method works the way I think it does, its performance with or without sorted indexes should be similar.

    Also, if anyone sees performance improvements for that array package, I'd be interested in adding them.  Most of it is the OpenG methods, with a few changes to help performance.

    EDIT: Oh, the OpenG method does work with unsorted elements to remove, and returns the deleted elements in the correct order.  I think the shift-and-subarray approach can still generate the same output, but it needs extra work to track things, which might eat into that time difference.
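
    As a rough illustration of that bookkeeping, here is a Python sketch of a single-pass delete that still returns the deleted elements in the order the (possibly unsorted) indexes were given.  It is only a pure-Python stand-in, not the actual shift-and-subarray LabVIEW code being discussed.

    # Delete many indexes in one pass over the array, while reporting the
    # deleted elements in the same order the indexes were requested.
    def delete_indexes(array_in, indexes):
        to_delete = set(indexes)
        kept = [x for i, x in enumerate(array_in) if i not in to_delete]
        deleted = [array_in[i] for i in indexes]  # preserves request order
        return kept, deleted

    kept, deleted = delete_indexes([10, 20, 30, 40, 50], [3, 0])
    print(kept)     # [20, 30, 50]
    print(deleted)  # [40, 10]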

  8. On 2/10/2023 at 7:07 PM, bjustice said:

    If I were to turn this into a VIP, is there an appropriate way for me to credit you?

    Anything is fine.  Just mentioning Brian Hoover (Hooovahh) in the VI description, and possibly linking to this thread, would be fine.  I put it out with no restrictions.  That being said, there is a very small chance that some day I will release a Dialog & User Interface pack on VIPM.IO which could include this.

  9. The problem with this idea is that the ico file contains multiple images in it, at different resolutions.  You could, in theory, take the LabVIEW image constant, save it to a temporary PNG file, then use that path to set the icon.  But I think you'd be better off with an ico file itself.  You can embed the ico file in the VI as a string constant and do the same thing, saving it to a temporary location as well.
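
    In case it helps, here is a small Python sketch of the embed-then-write-to-temp idea; the icon bytes are a fake placeholder, and the returned path would be handed to whatever actually sets the icon.

    import base64, os, tempfile

    # Placeholder bytes; in the real thing this would be the full .ico file
    # stored as a string constant.
    ICO_BASE64 = base64.b64encode(b"\x00\x00\x01\x00").decode()

    def write_icon_to_temp():
        ico_bytes = base64.b64decode(ICO_BASE64)
        fd, path = tempfile.mkstemp(suffix=".ico")
        with os.fdopen(fd, "wb") as f:
            f.write(ico_bytes)
        return path  # hand this path to whatever sets the window icon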

  10. I think all you need is a static VI reference, and then use the VI's name to open a reference instead of the file path.  Here is an example I made years ago.

    https://forums.ni.com/t5/LabVIEW/building-an-executable-with-vits-with-Labview-2011/m-p/2384984#M740405

    By dropping a static VI reference, LabVIEW knows it needs to include that VI in the built application as a dependency.  It will then be in memory, and you can just reference it by name.  If you actually want to replace the VI used at runtime with one on disk, then yes, you need the path to be a known good path.  But if you just want to open a reference to a thing and have it be included in the build, a static VI reference is the way to go.

  11. 2 hours ago, Antoine Chalons said:

    Why on earth did you use a ring and a property node on that ring to get the method (md4, md5, etc.)?
    An enum with a format into string seems nicer.

    Ah, also, if your VI is used on Linux and built as a shared library (in order to run the app as a service), then it causes a crash.

    That is some ugly ass code for sure.  I'm fairly certain I didn't create that ring, and instead just copied it from some other example set of code.  I can never see myself center-justifying a control like that, so I'm guessing I just got it from something else and then cut and pasted code until it worked.  Enum and format into string is the way to go.  That being said, I'm pretty sure I would have tested this on a Linux RT machine and didn't see a crash, at least running in source.
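
    For reference, here is a rough Python analogue of the enum-plus-format-into-string suggestion: the enum item itself supplies the method string, so no ring control or property node is needed.  hashlib is just a stand-in for whatever the VI actually calls.

    import hashlib
    from enum import Enum

    class HashMethod(Enum):
        MD5 = "md5"
        SHA1 = "sha1"
        SHA256 = "sha256"

    def hash_bytes(data: bytes, method: HashMethod) -> str:
        # The method name string comes straight from the enum value.
        return hashlib.new(method.value, data).hexdigest()

    print(hash_bytes(b"hello", HashMethod.SHA256))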

  12. On 1/20/2023 at 3:00 PM, Rolf Kalbermatter said:

    Their whole behavior sounds like the little child that sits in a corner and starts mocking because the world doesn't want to give him what he feels is his natural right to have.

    I agree, and it does at times sound desperate.  But also, is this just how things are in the corporate world?  Like, do they really care how they are perceived if in the end they get what they want?  They could offer more money, or they could just run a marketing campaign first.  It's relatively low risk; maybe it doesn't work out, but I'm sure the people in charge of these kinds of acquisitions have a playbook that I'm unfamiliar with.

    It sorta feels like we are the kids in a divorce proceeding, just going along with little or no influence on what happens to us.  I hope weekday dad buys us a new DVD player.

  13. 1 hour ago, ShaunR said:

    My first impression is that it's an excellent natural language interpreter, but I'm not impressed with the claim that all our coding jobs are in danger.  I'm much more impressed with the graphics AIs such as Midjourney.

    I haven't used ChatGPT yet.  But from what I've seen, the power of it comes from the conversation-like threads it can make.  I saw someone ask it for advice on how to get kids to eat vegetables.  It gave a list of ways to eat them, but it was pretty general.  They were then able to refine the request and say they needed advice specifically for children, and it came back much better.  Any examples that seem very shallow and unimpressive are likely just a single-line request, not a conversation asking it to refine or be more specific.

    I have been having fun with Stable Diffusion and AI-generated images.  This too has the same problem: you most often can't just put in some text and get something awesome.  Most of the time you need to refine it over and over, tweaking things and making decisions about what you are looking for, both in the prompts and in the parts of the image and how you want them to change.  I made a thread on the dark side about some of my experimentations.  In that thread is my new LinkedIn profile picture.

    [Image: AI-generated profile picture]

    This stuff is moving so very fast.  People are making changes to their workflow to have AI generate concept art, or to help with other things like writer's block, alternate endings, or generating tiling textures for surfaces in a game.  It isn't replacing industries; it is another tool to get jobs done.  Of course you can combine these two things.  Here someone asked ChatGPT to explain why AI art isn't real, and then asked it again to say why it is superior.

  14. I've never used any of these people for training, but they have done training in the past.  Samuel Taggart, Chris Roebuck, Fabiola De la Cueva, Jeffrey Habets, and Neil Pate are some I found.  All I did was google LabVIEW people advertising that they have a CPI.  These people are pretty easy to find on LinkedIn.

  15. Sorry, that wasn't clear from your message.  Personally I want to work less, not more, so I am not the right person to help you with this.  I know there are several people who specialize in LabVIEW training.  I'd search for those with a CPI or other training background.

  16. 3 hours ago, ShaunR said:

    I vaguely remember a third party that was offering an alternative solution that was implicitly geared towards LabVIEW and used the TPLAT. But I can't remember who it was or how it worked.

    There were a couple, but I never used any of them.  I think BLT is one that meets the needs, and I think Wirebird Labs had one, but that hasn't had any update in forever.

  17. Is this a sequence editor?  If so, I don't see the tab design being very scalable.  Here is a screenshot of something we do.  It has a tree control on the left with all the different step types, organized into categories.  The user can then drag and drop an icon over to the right.  Once they do, a dialog will come up with the settings for the step they selected.  They can also double-click a step on the right to bring up the settings for that step again.  There's lots of extra stuff, like custom step limits (which is a slide-out) and visual arrows if a step has a condition for jumping.  Loops are shown as a tree on the right, where you can drag and drop into or out of loops, and rearranging steps is also a drag and drop.  There are also columns of icons that can be clicked, which is actually two picture controls, because you can't have multiple glyphs in a tree, and even if you can hack that in, the icon size has to be really small.

    [Screenshot: Sequence Editor]

    Each step has a typed cluster with its settings, which gets flattened to a variant.  You might not need to get this fancy, but a listbox of step types and a listbox for the sequence might be a good start.
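
    For what the "typed settings flattened to something generic" idea might look like outside LabVIEW, here is a very rough Python sketch; the step types and their fields are invented purely for illustration.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class DwellSettings:
        seconds: float

    @dataclass
    class MeasureSettings:
        channel: str
        low_limit: float
        high_limit: float

    def flatten(settings) -> str:
        # Stand-in for "flatten to variant": every step is stored in the
        # same sequence list regardless of its settings type.
        return json.dumps({"type": type(settings).__name__,
                           "data": asdict(settings)})

    sequence = [flatten(DwellSettings(1.5)),
                flatten(MeasureSettings("AI0", 0.0, 5.0))]
    print(sequence)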
