Everything posted by hooovahh

  1. Unrelated - Nice mouse, I have the same one. Well, I did until I left it at the office just before a stay-at-home order.
  2. Oh you're dredging up more of my old complaints. WebVIs were to me the most important bullet point for using NXG. Today almost all of my non-FPGA code is written to work on Windows/RT/Pharlap/Linux/Mac. I do my best not to lock it down to an OS as some arbitrary limitation. So when WebVIs came around I figured I'd just think of them as another target, and the same VI for Windows could be used for WebVIs... nope. New file extension, and various limitations. The reason for this is that the controls on a WebVI aren't the same controls as on Windows. The WebVI controls are HTML5-compatible controls that look and behave as closely as possible to their Windows-only counterparts. Almost 2 years ago I was told that NI was moving toward having both platforms use the same control technology, thereby making them more compatible with each other... but that apparently hasn't happened yet. My specific complaint is that in current LabVIEW I can go to Tools >> Web Publishing Tool and create an HTML file that allows me to view and control a VI running on Windows or on RT without having to write any code. Yes, it is very limited in the browsers it supports, and it has issues. But in a couple of minutes I can be controlling a VI running on any non-FPGA target from a computer on the network. NXG can do something similar, but it needs lots of code to handle the communication. And updating one application means having to update the other.
Another thing you touched on was the front panel UI. Ugh. Okay, so I harp on the fact that we are missing System Controls every chance I get. NI usually pushes back with "Well why do you need system controls?", most likely trying to gauge the importance of it. I say: look at most LabVIEW front panels and you can tell right away that they are written in LabVIEW. I don't want that. I want a program to just look like a normal, native system program. And when the software is installed on a new system, my application will pick up those new system-looking controls and look native to that system. Look at how old and drab LabVIEW UIs look; that is why I want system controls. The majority of LabVIEW UIs look like they are stuck in the early 2000s. I had one person from NI say "But NXG is new and fresh and modern and doesn't look old". And I said "Well for now, but in a few years all NXG programs will have the same problem; they will all look the same and dated by the new UI standards." And here we are.
  3. In the past I've brought up issues like "You aren't listening to our feedback on the UI". And someone at NI reminded me that NXG started out looking very different. In fact (I hope I can say this) the UI actually had a ribbon interface for an alpha release or two, similar to Office products. NI claims they listened to our feedback and started over with the UI that contextually pops in on the right. In my opinion, NI would have moved away from ribbon interfaces on their own anyway, just because they had technical limitations and didn't scale well. Still, this is an example of the users complaining a lot, and NI changing it for the better. EDIT: Oh and @Mads made a point about how much harder it is to get new customers than to retain the ones you have. I'm not a Linux or Mac user. I probably will never install LabVIEW in either of those environments. But current LabVIEW has some users that do, and zero of them would be supported in NXG. From NI's perspective, what is the effort needed to support them, and what percentage of users can migrate to Windows? I actually lost a bet (pretty badly) with Michael about this. I bet him that within one year after NXG 1.0 was released NI would have a Mac or Linux version. We had both been part of the Alpha/Beta of NXG and I figured they were just prioritizing Windows until a stable release.
  4. Using NXG I don't feel like I am starting over, but I do feel like I am programming with one metaphorical hand behind my back. I have been part of some of the hackathons NI has hosted at NI Week. There it was important for NI to watch developers using NXG for the first time, to see what things they could figure out on their own, and to get feedback. There was definitely more than one moment where I would call someone over and say "Look, I can't do this thing I always do in current LabVIEW", and they would write things down and, without giving the answer, hint that it was possible. After a few seconds of looking in menus and right-clicking several times, I would discover it was possible, just in a different way. I don't want to sound like I am defending NXG. I've used the web technology in a grand total of 1 project and that's it, so my experience level is quite low. And there were several times I would tell someone at NI "Look, I can't do this" and they would look confused and ask why I would want to do that. I'd then give several real-world reasons I need to do that or I can't migrate to NXG. Which is probably why my use of NXG on real projects is low. Keep the feedback coming. I want to hear what struggles other people are having, just like I assume NI does.
  5. (Thank Michael.) I do see this as a slight issue. For now the new LINX is only in the Community Edition, so this one subforum will support both sets of topics. In the future LINX will be its own updated package on the Tools Network, and won't necessarily be part of the Community Edition. That being said, I don't think we will be making another subforum just for LINX stuff. I expect the majority of Community Edition topics will be related to LINX, and splitting LINX into Community and non-Community subforums would only split the conversation up. For now the Community Edition subforum has pretty icons showing the Pi, Arduino, and Beagleboard. This will hopefully drive people who want to make topics about this hardware into that subforum. Co-mingle away.
  6. Funny you should mention that. LAVA has created a new subforum dedicated to the LabVIEW Community Edition, and this thread (among a couple of others) has been moved into it. Feel free to post questions, comments, and information regarding the Community Edition there.
  7. Very good thread, keep up the discussion. I just wanted to chime in on one thing with my opinion. I think I understand what you are trying to say with this. But it seems that with every version of LabVIEW, NI focused on having at least a couple of important bullet points per release. They likely have to split resources between the current and NXG flavors, but just for my own categorization I made a list of the features that seem important to me from each release: 2020 - Interfaces; 2019 - Maps and Sets; 2018 - CLI and Python support; 2017 - VIMs; 2016 - Channel Wires; 2015 - Right-Click Framework; 2014 - 64-bit support on Linux and Mac; 2013 - Linux RT OS for RT targets; 2012 - Concatenating and conditional tunnels; 2011 - Silver Controls and Asynchronous Call By Reference; 2010 - VI Analyzer and PPLs; 2009 - New icon editor and snippets; 8.6 - QuickDrop. Looking over the release notes of each version of LabVIEW, it is clear that each year lots of work goes into each release. It's just that some years are more packed with features I care about than others. Fundamentally, dataflow concepts don't change, which is a good thing. A LabVIEW developer who started on LabVIEW 8.0 could probably do just fine programming in 2020. They will discover all kinds of things they never had, but it won't be like starting over.
  8. Yeah, I remember reading somewhere that the Pi Zero's CPU is missing some instructions or features that LINX uses. I can't seem to find the thread at the moment.
  9. I'm not sure what goes on in the subVI, but I'm pretty sure that method is slower than the 4 primitives, and speed is the major benefit of that method. Yes, there is the drawback of having so many cases, but they are buried in a subVI never to be seen, and what's the likelihood of having a cluster with more than 256 top-level elements? I'd personally still prefer the VIM, given the trade-off for the performance benefit. But thanks for the alternative.
  10. Yup, looks like what I expected. I assume you mean avoiding non-NI XNodes in your code, because there are several that people use all the time. That's fair, and when all things are equal I do prefer a VIM over an under(un)supported technology. I do also agree that the number of files is large even for something simple. Anyway, thanks for sharing.
  11. I don't have 2019 at the moment so I can't open yours. But is this related to the array of variant to cluster stuff? I made an XNode here that does this. The reverse is just a single variant to data with the data being a 1D array of variant.
  12. Thanks, great insight, and great suggestions. So far that means I see 4 possible solutions. 1) Use a non-strict VI reference and do the To Variant / To Strict VI Reference dance I showed in the first post (this does work BTW). 2) Have the terminals be variants, then use Variant To Data with my class in the VI, and again in the VI with the Wait On Asynchronous Call. 3) Use a queue or some other reference to get the same data, without using the Wait On Asynchronous Call. 4) Tim's solution. I've gone with the variant terminal solution (2). The calling and closing of the asynchronous VI is controlled by two private VIs in that class and should always return no error. Had I realized this wouldn't work from the start, I would have gone with solution 3, probably using a DVR, but I already had the VIs written and just needed to add some Variant To Data calls.
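Solution 3 (handing the result back through a reference instead of Wait On Asynchronous Call) has a close analogy in text languages: the asynchronously launched worker reports its result through a queue the caller holds onto. A minimal Python sketch of the idea, purely as an analogy (all names hypothetical, not a LabVIEW API):

```python
import queue
import threading

def helper(result_queue: queue.Queue) -> None:
    # Stands in for the asynchronously launched private VI:
    # do the work, then report back through the reference we were given.
    result = {"status": "done", "value": 42}
    result_queue.put(result)

# Launch the "VI" asynchronously, keeping only the queue reference around
# (the analog of storing a non-strict VI reference plus a queue in the class).
results = queue.Queue()
worker = threading.Thread(target=helper, args=(results,))
worker.start()

# Later, in "Close": block until the helper reports its result.
worker.join()
response = results.get()
print(response["value"])  # 42
```

The point of the pattern is that the caller never needs a strict (type-aware) reference to the worker; the data type only lives in the shared reference.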
  13. Okay, so I have a normal class. In that class is a private VI that I will open a reference to and run asynchronously using a Static VI Reference, an Open VI Reference by name, and a Start Asynchronous Call. All normal and all good. I realized I might want to capture the response from this VI once it finally returns in the Close, so I keep the reference to this VI in the class. Now for me to be able to get the response using the Wait On Asynchronous Call, I need the VI reference to be strict, which includes things like the terminals used. Notice I have a coercion dot in the image above because the Helper Reference in the class is a normal non-strict VI reference. As soon as I change this to be a strict reference, my private data type has an error: "VI Refnum 'Helper Reference': Private data control of this class uses an illegal value for its default data." So for now I have a non-strict VI reference, going to a variant, then Variant To Data with the type I want, and things seem to work. Is this just some kind of recursion issue that LabVIEW can't resolve? Is there a proper way of doing this? I also just thought of another solution: the terminals in my VI could be variants, and then I'd just convert to and from the class data. Is this okay? Thoughts? Thanks.
  14. Okay, so I wrote some code back in the 2011 era for doing some graph stuff and never used it. As a result there are a few places where the code could take advantage of modern features (event limiting, conditional and concatenating array tunnels, VIMs, even Sets and Maps), but in any case I have it here for others to take a look at and use as they want. I don't intend on updating this further. It all started when I found the built-in graph controls to be limiting in terms of signal selection and control. I wanted a way for a user to select the signals they want and then show those on a graph with a shared time scale. The problem was that at the time the checkbox selector on a graph had a scrollbar that couldn't be controlled. So I started with a single-column listbox showing all signals and allowing multiple to be selected. I wanted to see the current value, so I added that. Scope creep kept going until I was left with this thing that isn't done, but isn't terrible. In this demo there is a subpanel mode, independent windows, pause and resume, the normal graph palette controls, independent Y-axis scaling, coloring, buffer size control, visible signal selection and values, and a few other things. It was intended to be used in places where speed and exact values weren't critical. It was more or less a place where all signals of a system could be watched slowly. It uses a few things I've posted on LAVA before: my Variant Repository, Array VIMs, and Circular Buffer. Here is a video. Circular Graph.vipc
  15. Oh, also I found a .NET method of doing this. Using System.Windows.Forms.Screen you can get All Screens, and in a loop get the Working Area of each. Still, I prefer the non-OS-dependent method posted earlier.
  16. There are two major schemes in SCC: Lock-Commit and Merge. It seems most text-based languages don't bother with Lock-Commit, since an intelligent text merge can be done pretty easily. Since LabVIEW's VIs are binary, a merge can't really happen at the file level. The Compare and Diff tools NI has made do help, but I've found them at times to be messy. A more rigid approach is Lock-Commit. This works best when an application has been broken up into sub-modules, most often libraries or classes. Then multiple developers can work on separate files at the same time, but lock them so that each file can only be edited by one person at a time. This does take the cooperation of all developers. Blanket rules like "Don't lock the whole project" need to be followed by everyone. Otherwise you will be calling someone up telling them to unlock the files they have. I've gotten used to this over the years, but if you are new to LabVIEW and you have 1 or 2 huge VIs, then locking them to one user will cause problems. As mentioned, NXG uses XML as the file format (with a few binary blobs when needed), and merging it as text has some varying level of success. But with a relatively complicated XML file, merging might not always do what the developer expects.
  17. Very neat. So I wanted to update this to return all monitors, and all window and panel sizes. I also saw that this was using scripting, which means it won't be available in the run-time engine. Attached is an updated version that I believe does this. (2018) I also added a feedback node to return the previously read data if it is called again, with an optional input for a full refresh. I did this since I believe changing monitor positions and resolutions after an application has started is rare. Still, if you do this often you can just wire a True to this. Another option would be to use the Elapsed Time, and maybe do a full refresh once every couple of seconds. One thing I also removed was the passing in of a VI reference to get the application instance to use. I wasn't sure why this was being done, since regardless of the application instance the monitors and panel bounds will be the same. I realize AQ and Darren often work in private instances; it's just that in this case I didn't think it would matter. Please correct me if I'm wrong. I also left in the VI description stating it is thread safe, but am unsure if it still is. Compute Maximum Desktop Bounds Hooovahh Edit.vi
  18. Thanks for your contribution. A couple of things. Using polymorphics for this type of thing can become a pain pretty quickly. I had a similar thing for reading and writing variant attributes and used scripting to generate the 60+ data types I supported. But even then there were times when the data type wasn't supported. This also added 120+ extra VIs (read/write), adding to loading overhead. The more modern way of doing this is with a VIM that adapts to the data type provided. Your VIs were saved in 2015, when VIMs weren't an official thing, but you say you use 2018, where they are. Back then I'd still support doing this data type adaptation with XNodes. Posted here is my Variant Repository, which does a similar read/write of anything, including type-def'd enums and clusters. Putting these in a global space that any VI can read and write from is pretty trivial. All that is needed is a VIG, functional global variable, or even a global variable in a pinch. This will keep data in memory as long as the VI is still in memory and reserved to be run. This has the other benefit of easily being able to load and save data from a file, since it is all contained in a single place. Also with this technique there are no references being opened or closed, so no memory-leak concerns. Performance-wise, I also suspect your method may have some room for improvement. If I am writing 10 variables 10 times, looking at your code that will mean 100 calls to Obtain Notifier, 100 calls to Send Notification, and 100 calls to Release Notifier. I suspect reading a variant and then calling Set Variant Attribute 100 times will likely take less time and processing power.
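The variant repository described above is, at heart, a single global key-to-value map behind one pair of access functions, so persisting everything is a single operation. A rough Python analogy of the pattern (all names hypothetical; in LabVIEW the store would be a variant with attributes, held in a VIG or functional global):

```python
import json

# Single global store, the analog of the variant held inside a VIG /
# functional global variable.  Every caller reads and writes through
# these functions instead of holding its own notifier or queue reference.
_repository: dict = {}

def write_value(key: str, value) -> None:
    _repository[key] = value

def read_value(key: str, default=None):
    return _repository.get(key, default)

def save_to_file(path: str) -> None:
    # Because everything lives in one place, persisting it is trivial.
    with open(path, "w") as f:
        json.dump(_repository, f)

write_value("Voltage", 1.23)
write_value("Enabled", True)
print(read_value("Voltage"))  # 1.23
```

Note there is nothing to open or release: as long as the module (or in LabVIEW, the VI) stays in memory, the data does too, which is where the "no memory-leak concerns" point comes from.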
  19. So I just discovered this this morning, and I think it will help out when making VIMs that deal with supporting a scalar or a 1D array data type. My example is my Filter 1D Array VIM, posted here, which is heavily inspired by OpenG's implementation. In it the developer can filter out a scalar, or a 1D array of something, from a 1D array. I did this by adding a Type Specialization structure at the start which checks whether, after building the array, the data type matches the incoming array. If so, the input is a scalar and should be used. I then have another case where the data just goes straight through, on the assumption it must already be a 1D array. But what I realized today is that this is unnecessary. If in the VIM my input is set to a 1D array, and we add a Build Array with only 1 terminal, and that Build Array is set to Concatenate Inputs, then that whole structure isn't needed. A scalar will become a 1D array with one element, and a 1D array will have no items added to it by the Build Array. In this example the code simplification isn't much, but someone may have had two cases in a Type Specialization structure handling scalar and 1D array separately, and using this they could be combined into one. And one other minor thing: I don't think I will actually update the Filter 1D Array VIM to use this, just because knowing the input is a scalar means other sorting work (not shown) can be skipped, helping performance.
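The trick generalizes beyond LabVIEW: concatenating a "possibly scalar" input into a sequence yields a one-element sequence for a scalar and leaves an existing sequence's elements unchanged, so one code path handles both. A hedged Python sketch of the same normalization (the function name is hypothetical):

```python
def as_sequence(value):
    # Analog of a concatenating Build Array with one input terminal:
    # a scalar becomes a one-element list, while a list passes through
    # with the same elements in the same order.
    if isinstance(value, list):
        return list(value)
    return [value]

print(as_sequence(5))       # [5]
print(as_sequence([1, 2]))  # [1, 2]
```

Downstream code can then assume a 1D sequence everywhere, which is exactly why the Type Specialization structure becomes unnecessary.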
  20. Far from perfect, but what I have at the moment is: on a Mouse Move, Mouse Down, or Mouse Wheel event (with the limit set to 1, just like you), read the Top Left Visible Cell. This gives the Tag and Column Number. Using a second property node, write the Active Item Tag and Active Column Number to what was just read, and then read the Active Cell Position Left. I then use a feedback node to see if the Top Left Tag, Column, or Cell Position Left has changed since the last time the event fired. If it hasn't, do nothing. If it has, then go do what it takes to either shift the current image up and down, or left and right. Left and right are property nodes on the controls to move them, while the value can stay the same. As for shifting an image up and down, I use Norm's low-level code found here, which is pretty fast, but I did have to add a couple of opcodes to it since his post. Alternatively you might be able to set the image origin to get the same effect, but that uses a property node, as opposed to the value of the image control.
  21. Well, that or some really complicated stuff is happening. Here is a UI I've been working on lately. It is the view of a sequence that is currently running. On the left is a green arrow that moves to whatever step is currently being executed. Next to this are Goto arrows showing that a step has a condition that might jump to another step. Then there is the tree control that shows the sequence. In it is the step name (I blocked out many), then two columns that show some settings on the step with icons, and a column for a comment on the step. To the right is detailed information about the step that is currently selected. I welcome feedback, but here is how I made it. The green arrow is in a pane by itself. The arrow is a 2D picture control constant, and the position of the control is changed, but the value of the control never changes. The set of Goto arrows is another 2D picture control. This time it takes up the whole pane and is redrawn as needed. Since the user can scroll the tree, this means we need to poll the scroll position (on Mouse Down, Mouse Move, Wheel Scroll, limited to 1 event) and, if it changed, redraw the arrows. The While Loop can be collapsed, and if that happens the green arrow needs to point to it, since the current step is in the loop. In this example, if the While Loop is collapsed, then the Goto arrows need to be removed. This is done by reducing that pane's size to 0, making more room for the sequence tree. The tree is set to fill the next pane. The icons in the tree are two separate 2D picture controls (since icons can't easily be in multiple columns, and their size is 16x16). Their position and value are dependent on the scroll positions of the tree. To reduce weird UI behavior when resizing, the 2D pictures must be aligned to the top of the tree control.
The 2D pictures aren't transparent, because making them so means the Erase First option must be set, which ends up flickering when scrolling, since these need to be redrawn on scroll, window resize, or collapsing of the While Loop. Because they can't be transparent, and I wanted to align them to the top, the 2D pictures actually contain the headers as part of their pictures. Maybe I could have done this with another pane. When scrolling too far right or left we want only partial column icons to be drawn, so the column width is checked on scroll to ensure it looks right. Also, since the pictures aren't transparent, the blue background of an icon is a separate image. Oh, and the scroll wheel works, and will mess with lots of these things. This mostly works well. But as you might guess, it can take some time to draw all these things, figure out where they should go, and get them looking right. At first I had the quick and dirty approach of, on any Mouse Move, Mouse Down, or Wheel Scroll event, just redrawing everything. This meant tons of events firing all the time doing mostly nothing, and sometimes flickering the screen. So then I started looking at ways to improve it. If there is a value change on the tree, we don't need to update the Goto arrows at all, just the icons. Other information can be cached too, to help with performance. Maybe we could draw all Goto arrow permutations on start, and then switch between them as needed; then we wouldn't have to draw them, only move them vertically. This is the kind of thing I was talking about when I said diminishing returns. Right now drawing that Goto arrow takes probably 10ms or so. By complicating the code we could bring it down to 1ms. Is it necessary? Well, no, but if the total time of doing things adds up to be too much, we could look into that to improve performance.
Oh, and to add to the complicated nature of this: this is the same UI that is used to create the sequence, with dragging and dropping and moving steps around, dragging Goto arrows around, and clicking icons to perform other actions like sliding in another subpanel, or changing the icons. In all of this, Defer Panel Updates is only used once, on the full refresh which only happens on first load. Everywhere else we just update what we need, and it has been pretty responsive.
  22. You probably already know this, but property nodes have a larger performance hit than a local variable when just the value needs to be updated. But if you want to update it in a subVI, I get why you might use property nodes. The alternative there is to use the Set Control Values by Index function. You can read the indexes of the values you want to update in an initialization step, then pass on the indexes and values you want to update. Of course this exercise, and this topic, has diminishing returns. I mean, let's say I just update all UI elements, all the time, periodically. The time it takes to update all of this can vary a lot based on what needs to happen, but let's just say it takes 100ms, which to be fair is a long time. Will the user notice it takes a while? Maybe. Okay, so add the defer updates and let's say it is down to 50ms. Okay, let's just update only the elements that change: 30ms on average, due to some overhead. Okay, use Set Control Values by Index: 10ms, and you've added a decent amount of complexity to code that might not have needed it. So for me it usually starts with just updating everything and seeing how it goes, then refactoring as needed. It feels like the lazy method, but I've gone down the other road where I'm hyper-concerned with performance and timing, and I spend lots of time making the code great but overly complicated, which can make it harder to maintain. Various tools can help minimize these issues, but there are potential downsides to that too.
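The middle step above ("just update the elements that change") amounts to diffing the new UI state against the last one and touching only the changed values. A minimal Python sketch of that caching step, as an analogy only (names hypothetical, not a LabVIEW API):

```python
# Cache of the indicator values as of the last UI update.
_last_state: dict = {}

def changed_values(new_state: dict) -> dict:
    # Return only the indicators whose value differs from the last
    # update, so the (expensive) UI writes can be limited to those.
    global _last_state
    diff = {k: v for k, v in new_state.items() if _last_state.get(k) != v}
    _last_state = dict(new_state)
    return diff

print(changed_values({"Voltage": 1.0, "Current": 0.5}))  # everything, on the first call
print(changed_values({"Voltage": 1.0, "Current": 0.6}))  # {'Current': 0.6}
```

The cache is the "decent amount of complexity" being weighed: it saves UI writes at the cost of extra state the code now has to keep correct.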
  23. I opened it in my 2018 and am missing all kinds of things, but attached is what LabVIEW came up with. In the future you might want to go to the Dark Side, because they have a subforum dedicated to this type of thing. Source 2016.zip
  24. Thanks everyone for giving suggestions and helping out other users. This is outside of my expertise, and all I could do is shrug and pass it along to Michael. It seems like this is isolated to one or a couple of users' setups and not a widespread issue. If there is any real issue with LAVAG I can pass it along.
  25. I've never seen Jeremy's presentation; some neat stuff in there for sure. Since my demo (and another one here involving Notepad, video here), I have cleaned up the VIs a bit with some basics like set parent/child relationship, get relationship, move and reposition, get/set menu bar, resize status, and title bar, and a few other style functions. This hasn't been posted publicly, but it is pretty basic and pulled in from other examples on the forums. A more updated and complete Windows API could be useful.