
Neil Pate


Posts posted by Neil Pate

  1. 33 minutes ago, hooovahh said:

    Using NXG I don't feel like I am starting over, but do feel like I am programming with one metaphorical hand behind my back.  I have been part of some of the hackathons NI has hosted at NI Week.  There it was important for NI to see developers using NXG for the first time, and seeing what things they could figure out on their own, and get feedback on things.  There was definitely more than one moment where I would call someone over and be like "Look I can't do this thing I always do in current LabVIEW" and they would write things down and without giving the answer they would hint that it was possible.  After a few seconds of looking in menus, and right clicking several times I would discover it is possible but in a different way. 

    I don't want to sound like I am defending NXG. I've used the web technology in a grand total of 1 project and that's it, so my experience level is quite low. And there were several times I would tell someone at NI "Look, I can't do this" and they would look confused and ask why I would want to do that. I'd then give several real-world reasons I need to do that or I can't migrate to NXG. Which is probably why my usage of NXG on real projects is low. Keep the feedback coming. I want to hear what struggles other people are having, just like I assume NI does.

    hooovahh, we have been giving feedback for > 5 years. Nobody with any authority to direct change seems to be interested.

    The thing I cannot understand is this... the engineers intimately familiar with LabVIEW today are the engineering managers of tomorrow. NI is pissing off the engineers of today who are the ones signing the purchase orders of tomorrow.

    I never intended for this post to descend into a rant session; I am just disappointed that after so much investment by NI this is the product that has been laid on the table. There was no need to revisit and change every single design decision in current gen, most of the paradigms worked really well. I would literally hold captive anyone even remotely interested in LabVIEW and gush wildly over its amazingness, like a parent gushing over their favourite child. Now when people ask me about NXG I sort of blink, stare into the distance and change the conversation.

     

    • Like 2
  2. 35 minutes ago, hooovahh said:

    Fundamentally dataflow concepts don't change, which is a good thing.  A LabVIEW developer who started on LabVIEW 8.0 could probably do programming in 2020 just fine.  They will discover all kinds of things they never had but it won't be like starting over to them.

    And this is another thing that makes the jump to NXG less palatable; it is like starting over.

     

    • Like 2
    @Aristos Queue, the problem is that the GUI is everything. It does not really matter what is going on under the hood (although all of us CG devs really appreciate the regular strides forward).

    I have yet to speak to another dev who has tried out NXG and is really excited about the majority of changes that have been made. It *really* feels as though very few actual current LabVIEW devs were consulted in the process. I am sure you will say that NI have done studies and had focus groups etc etc (which I totally believe), but to me personally the new changes suck.

    I have used the Digital Pattern Editor, which has the same GUI framework as NXG, and the MDI nature of the GUI sucks so much. I constantly need to be able to look at several different things at once, and MDI makes this an exercise in frustration.

     

    • Like 2
  4. I must admit I have not fully grokked all the changes with classes and types in NXG. Last time I looked it was a mess so I just left it for a while.

    Your screenshot makes me sad for a bunch of reasons. So much ugly grey except for the totally random bits of blue thrown in.

    Now, let's think about the lack of consistency here.

    1. On the top left the file icon has a blue background to show us it is selected, ok cool
    2. So in the tree view Library.gcomp is selected, but it is not blue... ok... why?
    3. The tab in the middle pane is not coloured blue to indicate it is current... ok... why?
    4. On the right we have a tab structure and the Document tab which is currently selected is highlighted...

    Sigh... am I the only one who sees stuff like this or am I just crazy and looking at it totally wrongly?

    Why are the tickboxes in the middle coloured blue? This is so random.

    [attached screenshot of the NXG editor showing the selection highlighting described above]

     

    Now, I am not a graphic designer, and I am sure that NI has done their due diligence in designing this GUI that we spend 40+ hours a week interacting with, so I will give them the benefit of the doubt (again...) but I just don't get it.

    • Like 2
  5. 4 hours ago, JKSH said:

    For example, I could previously have multiple enums called "State" in my project because each copy is in a different class/library: ClassA.lvclass:State.ctl and ClassB.lvclass:State.ctl. However, NXG forces globally unique names for enums/clusters.

    Yikes, this I did not know!

    This was probably decided by the same committee that came to the conclusion that nobody really should be using Virtual Folders anyway. 🙄

  6. 6 hours ago, smithd said:

    This is kind of an interesting concept to me and it's one I've been curious about. Just based on my own anecdotal experience, it seems like the thing holding LabVIEW back is less the UI of the editor and more the combination of "not a real programming language", Python being the thing taught in schools, relatively high costs, limited developers and general unavailability of a great many experienced LabVIEW devs outside of a pretty insular community, and the deployment/runtime situation (LabVIEW doesn't get a free pass like .NET, JavaScript, and in most cases Java).

    I'm genuinely curious if there is data out there which justifies the things NXG is focusing on.

    Don't forget NI is a hardware sales company. LabVIEW is just the tool that allows them to sell more hardware. I suspect this is why there is such a heavy push to have hardware integration directly in LabVIEW NXG (to the detriment of the system as a whole). NI have decided to get in on the whole cloud business with SystemLink, but again they have missed the point that by far the majority of LabVIEW devs just don't care about this or won't use it.

    Happily, the Community Edition will be able to compete on price with all the other zero-cost languages out there, but candidly I feel that the ship has sailed for younger devs, and those holding LabVIEW tickets are still at home putting on their pyjamas.

  7. 12 hours ago, Darren said:

    I think @Aristos Queue added that in LabVIEW 2019.

    Thanks Darren. Now I just need to remember this feature exists. I work with 2015, 2017 and 2019 regularly, so it can be tricky remembering what is available in which version. Especially interesting is your change to the menu ordering for Clean Up Wire; I have been trying to remember this one for nearly a year now! (I know I can turn it off, but eventually I will be using only 2019 or later, so I may as well get used to it.)

    GitKraken is free for public GitHub repositories. I never tried it with anything local, but I think you are right that the free version works for that as well. I spent a few weeks with the free version and happily handed them some money for the Pro version. It's a really nice client. I actually bought a second license, as I had the GitHub client on my wife's computer and it was so terrible in comparison.

    • Thanks 1
  9. 41 minutes ago, bmoyer said:

    or some bug not allowing many simultaneous IMAQ connections to all start up at the same time.

    That is interesting! I actually came to the same conclusion. I was trying to launch four of my own actors, each of which opened an IMAQ reference to a (different) camera, and quite often launching them in parallel would cause the system to hang. I changed the launch to a serial process and the problem never happened again.
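    (Not LabVIEW, but the pattern boils down to something like this minimal sketch, where open_camera and acquisition_worker are made-up placeholders rather than any real IMAQ API: do the opens strictly one at a time, and only let the steady-state acquisition run in parallel.)

        import threading

        def open_camera(camera_id):
            # Placeholder for the driver call that opens the device (the IMAQ
            # reference open in my case); this was the step that did not
            # tolerate being called concurrently.
            return camera_id  # stand-in for a real handle

        def acquisition_worker(handle):
            # Placeholder steady-state acquisition loop; this part ran in
            # parallel without any trouble.
            pass

        camera_ids = ["cam0", "cam1", "cam2", "cam3"]

        # Serial phase: open the cameras one at a time so the driver never
        # sees overlapping open calls.
        handles = [open_camera(cam_id) for cam_id in camera_ids]

        # Parallel phase: only the acquisition itself runs concurrently.
        workers = [threading.Thread(target=acquisition_worker, args=(h,), daemon=True)
                   for h in handles]
        for w in workers:
            w.start()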

    • Like 1
  10. 1 hour ago, drjdpowell said:

    Thanks for the ideas. For clarification: I don't actually want multiple running instances of the EXE; rather, I need to find out why it is crashing silently. I am running multiple EXE instances to try and increase the amount of debug info (since I can try different configurations in each). So far, my only clue is that all running copies using the DLL die within seconds of each other, which implies a common trigger event.

    Sadly, there are no obvious issues in Dependency Walker, and no network connections to the DLL. The problem is seen on multiple PCs, running single or multiple instances. Trying to rebuild the app in LabVIEW 2019 as a test, but it is proving difficult (the builds are broken, and it is a very large app).

    Probably not related to your issue... but once upon a time I had a nasty piece of hardware that I had to interact with using the vendor's DLL. On a certain type of PC it would randomly crash in LabVIEW after some number of days. The same PC running a simple Python script calling the same DLL functions ran without issue for weeks at a time. In the end I had to change the PC 😞 and the problem never happened again. I tried every combination of DLL settings I could think of, but nothing ever got it working nicely on that PC.
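    For reference, that Python test harness was nothing more sophisticated than something along these lines (a minimal sketch; "vendor.dll", read_value and its signature are made-up stand-ins for the real library and its export):

        import ctypes
        import time

        # "vendor.dll" and "read_value" are hypothetical names standing in for
        # the real DLL and the exported function the LabVIEW app was calling.
        lib = ctypes.WinDLL("vendor.dll")
        lib.read_value.argtypes = [ctypes.c_int, ctypes.POINTER(ctypes.c_double)]
        lib.read_value.restype = ctypes.c_int

        channel = 0
        value = ctypes.c_double()

        # Hammer the same call the application makes and log any failures,
        # to see whether the crash lives in the DLL or in how it is hosted.
        while True:
            status = lib.read_value(channel, ctypes.byref(value))
            if status != 0:
                print(f"error {status} at {time.ctime()}")
            time.sleep(0.1)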

  11. 29 minutes ago, Antoine Chalons said:

    Neil, can you explain what made you leave Bitbucket for GitHub?

    Sure. My primary motivators were:

    1. Performance (it always seemed slow even to access the web interface, and SSH pushes and pulls often took forever).
    2. The war is over; Git has been crowned the victor. I backed the wrong horse about 8 years ago and now it is time to move on. The only reason I chose Bitbucket in the first place was their support for Hg.
    3. The way they handled the move to Git was terrible.
    4. GitHub now has free private repositories, and as of yesterday you can have teams for free.

    Realistically, the only thing keeping me on Bitbucket for several years was the pain of moving all 100 of my repositories over to somewhere else. They forced my hand with this and I have not looked back since.

    Since then I have found other good things:

    1. GitKraken is incredible
    2. Visual Studio has really nice (simple) support for Git with the Git extension
    3. I don't have to remember how to pronounce Atlassian anymore
    • Like 1
  12. 46 minutes ago, drjdpowell said:

    Since I have just painfully converted many years of Hg repos to Git, I thought I'd make a quick note here.   I used the procedure described here: https://helgeklein.com/blog/2015/06/converting-mercurial-repositories-to-git-on-windows/

    Here is a screenshot of the Windows cmd window:

    [screenshot of the Windows cmd window showing the conversion commands]

    After this one needs to push to the new git repo on Bitbucket (though you could use another service).   I could not do this from the command line, but I could push after opening the repo in SourceTree (which I had previously set up to use the right private key to talk to Bitbucket).

    Personally, I stayed with Bitbucket, partly because I could transfer my Issue Tracker history. It involved exporting the issues (under Settings) and then importing them into the new Git repo.

    It was quite painless, except for the tedium of converting many, many repos.

     

    James, I took this as a good opportunity to move away from Bitbucket and over to GitHub. Their import tool worked pretty well, but it was a bit tedious, as you say.
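    If anyone else is staring down a pile of repos, the per-repo steps can at least be scripted. This is only a rough sketch of the hg-fast-export route (one common tool for this; the procedure in the blog post James linked may differ in detail), with made-up paths, a placeholder remote URL, and the assumption that hg-fast-export.sh (from the frej/fast-export project) can actually be run from whatever shell these commands end up in (e.g. Git Bash on Windows):

        import subprocess
        from pathlib import Path

        # Made-up layout: one subfolder per Mercurial repo under HG_REPOS,
        # converted Git repos created under GIT_REPOS.
        HG_REPOS = Path("C:/repos/hg")
        GIT_REPOS = Path("C:/repos/git")

        for hg_repo in HG_REPOS.iterdir():
            if not (hg_repo / ".hg").is_dir():
                continue  # skip anything that is not a Mercurial repo

            git_repo = GIT_REPOS / hg_repo.name
            git_repo.mkdir(parents=True, exist_ok=True)

            # Initialise an empty Git repo, import the Mercurial history,
            # then check out the converted HEAD.
            subprocess.run(["git", "init"], cwd=git_repo, check=True)
            subprocess.run(["hg-fast-export.sh", "-r", str(hg_repo)],
                           cwd=git_repo, check=True)
            subprocess.run(["git", "checkout", "HEAD"], cwd=git_repo, check=True)

            # Point the repo at its new remote and push everything;
            # the remote URL is obviously a placeholder.
            remote = f"git@github.com:your-user/{hg_repo.name}.git"
            subprocess.run(["git", "remote", "add", "origin", remote],
                           cwd=git_repo, check=True)
            subprocess.run(["git", "push", "--all", "origin"],
                           cwd=git_repo, check=True)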

  13. 21 minutes ago, hooovahh said:

    You probably already know this, but property nodes have a larger performance hit than a local variable when just the value needs to be updated.  But if you want to update it in a subVI I get why you might use property nodes.  The alternative there is to use the Set Control Values by Index function.  You can read the indexes of the values you want to update in an initialization step, then pass on the indexes, and values you want to update.

    Of course this exercise, and this topic, has diminishing returns. I mean, let's say I just update all UI elements, all the time, periodically. The time it takes to update all of this can vary a lot based on what needs to happen, but let's just say it takes 100 ms, which to be fair is a long time. Will the user notice it takes a while? Maybe. Okay, so add the Defer Panel Updates and let's say it is down to 50 ms. Okay, let's just update the elements that changed: 30 ms on average due to some overhead. Okay, use Set Control Values by Index: 10 ms, and you've added a decent amount of complexity to code that might not have needed it.

    So for me it usually starts with just updating everything and seeing how it goes, then refactoring as needed. It feels like the lazy method, but I've gone down the other road where I'm hyper-concerned with performance and timing and I spend lots of time making the code great but overly complicated, which can make it harder to maintain. Various tools can help minimize these issues, but then there are potential downsides to that too.

    My programs are so well optimised the user never notices the delay 😉

    Joking aside, if I am updating so many controls that the GUI is slow to paint, it is probably symptomatic of a bad GUI design.

    I have tried many different techniques, but my currently preferred one is to assume upfront that I am going to use property nodes for all value updates; this then allows me to have a nice Update subVI where I update all the values every time I need to update anything. Any non-trivial GUI seems to require property nodes to do the "fancy" stuff, so I just start off with them like that. I normally don't bother with Defer Panel Updates unless I am dealing with a *lot* of indicators (which is probably a smell of a badly designed GUI) or one of the usual suspects like a big table or tree control.

    Thanks Rolf, that sounds quite plausible.
    AQ, this is a brand-new PC with a Ryzen 3600 in it. I would consider changing the CPU if NI could give some guidance on what to change it to.

    Is there any sort of hot fix or patch I can try?

    For now I am out of trouble, as I have rewritten the VIs that I needed to use from that library (mine were really simple), but I doubt others will be so lucky.

     

    @Aristos Queue I have the PC back in my possession and have run the VI you asked for.

    Here is the result.

    [screenshot of the result of running the VI]

     

    Now, if I immediately try and drop Mean.vi I can see it is trying to load it from the correct directory

    [screenshot showing Mean.vi being loaded from the correct directory]

     

     

    But if I go into the VI and open the Call Library Function node, I get the e:\builds\penguin stuff:

     

    [screenshot of the Call Library Function node configuration showing the e:\builds\penguin path]

     

    The directory on disk is correct:

     

    [screenshot of the correct directory on disk]

     

    Does this make any sense to you?

     

     
