Everything posted by hooovahh

  1. Speak for yourself. I am a no-Auto-Tool kind of guy, so while my mouse is moving to the new position my left hand is repeatedly mashing Tab, CTRL+E, CTRL+W, CTRL+S, Quick Drop (CTRL+D for me), and all the Quick Drop shortcuts. I've been nicknamed the "Certified Keyboard Finger F***er". Most will not admit that they don't like visual programming because they didn't grow up with it. Most will make up some other excuse so they don't sound so stubborn. If they do admit that they don't like it because it is unfamiliar, then I think something can be said for watching new developers who have never programmed anything and seeing which paradigm the majority pick up faster. When you have 8 year olds developing Mindstorms robots I think we can agree that visual programming is easier to pick up. I've also had others say they don't like LabVIEW because they don't know what the compiler is doing. And I ask them: what is the compiler in C doing? What assumptions or optimizations is it making? EDIT: Do we have a curse word filter on LAVA? We can swear on the internet right?
  2. I don't think it's necessary to remove a VI before inserting another. In my subpanel applications I rarely use the Remove VI function and I've never seen any memory issues, which is the only thing I would be concerned with. With the new properties in 2012 you can even get a reference to the VI that is in the subpanel (Inserted VI Property). This is only useful if you have a VI inserted, so you may want to keep the current VI inserted until a new one is selected to replace it.
  3. For those that don't know: if you double click a subVI while holding the CTRL key it will open both the front panel and the block diagram, but bring the block diagram to the front. This also works when the VI is running (quite handy), but only if Auto-Tool is off while the VI is running. As for the topic, I basically said my concern in the other thread: it may affect my skill as a programmer if I have fewer visual cues linking the software function to a visual representation of that function. If this is compensated for or improved by some other feature, I'm all for it.
  4. Icon View, Auto-Insert Feedback Node, Auto-Tool, Scripting. The default settings I change on each install of LabVIEW. I'm not saying my views on these features won't change, but I do think it tells you something about what kind of LabVIEW programmer you are if you only ever use the icon view for terminals. I believe there was a CLAD question once that was something like "What are the main components of a subVI?", likely wanting you to say Block Diagram and Front Panel, but I agree that a VI doesn't really need a front panel. And if I choose to remove the block diagram, I guess my VI doesn't really need that either. But at some point I feel like we may lose something. LabVIEW is visual (duh) and as a result I feel like my brain makes connections between the subVI icon and the functions it performs. This helps me remember things very quickly, and traversing down the rabbit hole of subVIs seems easy because I remember all their functions visually. Likewise my brain makes connections between the front panel and block diagram and knows how the controls and indicators on the front panel are coupled. I started this post wanting to say I was on board with removing the front panel, but now I'm not so sure, unless there is again some visual way for me to understand code in a way that is hard to explain. In text based code I find myself remembering what the code does based on the shape of the text in that line (or surrounding lines). I feel like I'm a better programmer because of these visual connections, and I hope that if the Front Panel goes away, some new method will be there to replace the visual connection that will be lost.
  5. This might help. It can tell you if a VI is in a subpanel or not; then your code can perform different cleanup operations depending on whether it is running standalone. http://digital.ni.com/public.nsf/allkb/FB79ED8B6D07257B86256E93006E31FA
  6. Okay this makes a little more sense. Typically when I design an application like this, I don't allow my VI in the subpanel to stop itself. I only have the parent VI (the one with the subpanel) insert, remove, or stop the VI. You obviously need some mechanism to tell the parent VI that the child VI has stopped if you are going to allow your child VI to stop running like this. There are many ways to do this; the easiest is probably a functional global with an array of references and an array of booleans or enums keeping track of the state of the dynamically loaded VIs (see the sketch below). Then your parent VI can read this global data and know whether to insert a new VI or use the existing reference. Other things that could work are queues, notifiers, user events, and probably many others.
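     For illustration only, here is roughly that functional-global idea sketched in Python (G doesn't paste into a forum post): one registry record per dynamically loaded VI, holding a reference and a state the parent can poll. The names PanelRegistry and PanelState are made up for the example.

     ```python
     from enum import Enum

     class PanelState(Enum):
         IDLE = 0      # loaded but not running
         RUNNING = 1   # currently shown in the subpanel
         STOPPED = 2   # child has reported that it stopped

     class PanelRegistry:
         """Stands in for the functional global: one record per dynamic VI."""
         def __init__(self):
             self._records = {}  # name -> {"ref": ..., "state": PanelState}

         def register(self, name, ref):
             self._records[name] = {"ref": ref, "state": PanelState.IDLE}

         def set_state(self, name, state):
             self._records[name]["state"] = state

         def get(self, name):
             return self._records.get(name)

     # The child marks itself STOPPED; the parent reads that and decides
     # whether to reuse the existing reference or insert a different VI.
     registry = PanelRegistry()
     registry.register("Settings Panel", ref="placeholder VI reference")
     registry.set_state("Settings Panel", PanelState.STOPPED)
     if registry.get("Settings Panel")["state"] is PanelState.STOPPED:
         print("Parent: child stopped, safe to insert a different VI")
     ```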
  7. Does the VI Execution >> State property help at all? The VI reference is still valid; LabVIEW keeps that VI in memory even after it stops running, so the reference will not become Not a Refnum until the VI is no longer in memory. For instance, you could use that reference to invoke the Run VI method and it would start running again.
  8. Very often I will be working on a VI then get frustrated when the block diagram cleanup button is missing...only to realize that I was indeed on the Front Panel. The good news is Jack already has an idea so I don't need to take the time to post it. http://forums.ni.com/t5/LabVIEW-Idea-Exchange/How-about-a-Front-Panel-Cleanup/idi-p/963556
  9. I don't know if this is the "best way", but have you thought about making your enum with Reserved3, Reserved5, Reserved6, Reserved7 placed in between the valid values (see the sketch below)? This way you can have your enum in a case statement, but also directly convert a U8 of value 4 to MotorDisconnected. In the past my numbers have jumped a lot, say from 0, 1, 125, 500, 1000, so using an enum with blanks was not a solution, but for this it may be. In my case I was lucky and my enum values were something to the effect of "500 Baud", "1000 Baud", so I got the value as a string and converted it to 500 or 1000 decimal, but that too is not an option here. EDIT: Is there a valid reason why enums must be sequential, by the way?
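     To show that reserved-placeholder idea in text form, here is a quick Python IntEnum sketch; the status names are invented, and the point is just that padding values 3 and 5-7 keeps a raw U8 of 4 mapping straight to MotorDisconnected.

     ```python
     from enum import IntEnum

     class MotorStatus(IntEnum):
         OK = 0
         Warning = 1
         Fault = 2
         Reserved3 = 3          # placeholder so numeric values stay aligned
         MotorDisconnected = 4
         Reserved5 = 5
         Reserved6 = 6
         Reserved7 = 7
         OverTemp = 8

     raw = 4                    # U8 value read from the device
     status = MotorStatus(raw)  # converts directly, no lookup table needed
     print(status.name)         # "MotorDisconnected"
     ```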
  10. I'd start with MAX. If you have DAQmx installed, the cDAQ chassis will show up there and you can start a test panel on your devices. Then you'll see the low level inputs and outputs of your hardware and can make sure the values you see are the ones you expect. Don't start writing tons of code, then get strange values and blame the software. If MAX doesn't have the values you expect, then LabVIEW won't either. If MAX looks good, then check out some of the DAQmx examples under Help >> Find Examples in LabVIEW. The examples got a big facelift in 2012 and 2013, so I hope you are using one of those versions.
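     If you'd rather sanity-check the hardware from a script instead of (or in addition to) a MAX test panel, NI's nidaqmx Python package can do a quick read; this is a minimal sketch, and the channel name cDAQ1Mod1/ai0 is an assumed placeholder that should be replaced with whatever MAX shows for your chassis.

     ```python
     import nidaqmx  # NI's DAQmx Python wrapper (pip install nidaqmx)

     # Read 10 samples from one analog input channel and print them.
     # If these values look wrong here, they will look wrong in LabVIEW too.
     with nidaqmx.Task() as task:
         task.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0")
         data = task.read(number_of_samples_per_channel=10)
         print(data)
     ```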
  11. This is the best practice for sure. I always start with a project, or a library. Oftentimes I forget about some kind of linkage that editing just a VI will cause issues with. I like to bring up text based languages and ask: if we were writing VB code, would we open some random file in Notepad to edit it? No, we would open the project first.
  12. Okay, so in the past a way to force LabVIEW to include some VI was to add it to a part of the code that is never called. That however breaks some of your requirements (like dynamic loading), but it can be accomplished today. What about this solution, which is also "clunky" and which I don't have a working example of: Alice makes a Pre/Post Build VI that scans the application hierarchy. Alice knows the interdependencies that LabVIEW can't, and can write the Pre/Post Build VI to know the links without loading the VIs and include them in the build somehow, possibly by just copying them to a subfolder that the EXE is in, or by modifying the Build Specifications to have them be Always Included. This can also be done today but sounds like a pain, and again Bob will have to know to use this Pre/Post Build VI, but hopefully he got the project file as a template from Alice which already calls this VI. EDIT: To get number 4 there could be a tag of some kind, maybe a #AlwaysInclude, so that any 3rd party could add more, assuming they adhere to the standards Alice added to her Pre/Post VI (a rough sketch of this tag-scanning idea is below). For the future...I guess there could be some kind of new object similar to the Static VI Reference called a Dynamic VI Reference (too confusing?), which would be a flag to the compiler/builder making it known that this VI needs to be included because of a non-static dependency, but should not be loaded into memory because of it.
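     Purely as a sketch of that #AlwaysInclude idea (the tag convention and folder layout are assumptions, not an existing NI feature), a post-build step could look something like this in Python: scan the source tree for tagged VIs and copy them next to the built EXE.

     ```python
     import shutil
     from pathlib import Path

     TAG = b"#AlwaysInclude"  # assumed tag placed in the VI's documentation

     def collect_always_included(source_dir: Path, dest_dir: Path):
         """Copy every VI carrying the tag into a plugin folder beside the EXE."""
         dest_dir.mkdir(parents=True, exist_ok=True)
         copied = []
         for vi in source_dir.rglob("*.vi"):
             # Crude check: look for the tag anywhere in the file's bytes.
             if TAG in vi.read_bytes():
                 shutil.copy2(vi, dest_dir / vi.name)
                 copied.append(vi.name)
         return copied

     # Example post-build call (paths are placeholders):
     # collect_always_included(Path("source"), Path("builds/MyApp/plugins"))
     ```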
  13. ...well this is quite interesting. I haven't tried it yet but I have some concerns. I notice that not all UI elements are the same size between the VI and the web page, so will UI objects often end up on top of each other? What about subpanels, does it work with them? How about splitters and panes? Which brings me to the question of window resizing and how it handles that. Don't get me wrong, it's very neat and I like having options, I just see several updates to NI products having similar functionality.
  14. Yeah, I guess there are some things a Raspberry Pi does that I would have a hard time getting a MyRIO to do (1080p HDMI output is one), but then again I can't have the Raspberry Pi update a timed processing loop running at 40 MHz+ (talking about the FPGA).
  15. I get more excited about the possibilities. This can basically become a headless device that performs a task using AIO, DIO, SPI, UART, Vision, WiFi, Bluetooth, and USB. I have so many applications where someone will ask for a program that does X and just runs without needing a PC. A simple one recently was someone wanting to send a CAN command and have a waveform generated based on the message. So send a CAN message and a sine wave is generated at X frequency and Y amplitude. This is quite easy with LabVIEW, a cDAQ CAN module, and an analog output module, but it needs a laptop. With this I could do it without a PC. I could even add controls (pots, switches) and an LCD output. What about a headless resolver simulator that simulates a position or speed? Or a remote logging application where you can do an FTP data dump over WiFi? With the new version of Multisim/Ultiboard you can simulate a circuit then push it down to an FPGA and it will best approximate the simulated circuit; we could put this on a MyRIO and take our simulation into the real world without a PC. Talk about rapid prototyping! If an Arduino and a Raspberry Pi had a baby, it would have nothing on a MyRIO. (Okay, maybe not exactly true, but still.)
  16. I can't seem to find the link at the moment, but Olivier JOURDAN over at SAPHIR made a String Details custom probe that I've been using for a while and like, which shows the hex and code displays of a string. They have several others here but the String Details one is not among them. But I totally agree that the probes that ship with LabVIEW could use an overhaul for things like this, along with properly handling resizing of the probe window.
  17. I talked to someone at NI today and they were able to get some slightly more official information (while still being unofficial). Not naming any names, but the word from this individual was that student pricing would be $250, university pricing would be $500, and non-academic pricing would be $1,000. To be clear, nothing official, but I can't wait to get my hands on one of these.
  18. You discovered the mouse-overs? Sounds like you were one of today's 10,000. Relevant XKCD. A bit of a stretch, but I wanted to see if I could find one.
  19. I can't say for sure, but have you tried repairing LabVIEW? Not saying it will fix it, I just have never seen the behavior you are describing.
  20. That is fantastic. I wanted the INI key for a more detailed list of the files that went into an installer; I had no idea that it would also give more detailed failure information. I may do some testing on build time, and if it isn't much longer I'm going to use this on all the builds I do.
  21. That's how I've done it. Remember a variant attribute can be any data type, so you don't need to store just the same data; it can be a lookup table for any data you feel like storing (a conceptual sketch is below). WORM uses this technique. I don't use WORM itself, but I use the same concepts in my applications.
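     Conceptually, that lookup table boils down to something like this Python sketch: one store keyed by name, where each entry can hold any data type, written once and read many times. This is only a stand-in for WORM / variant attributes, not their actual implementation.

     ```python
     class Worm:
         """Write Once, Read Many: a name -> anything lookup, conceptually."""
         def __init__(self):
             self._store = {}

         def write(self, key, value):
             if key in self._store:
                 raise KeyError(f"'{key}' was already written")  # write once
             self._store[key] = value                            # any data type

         def read(self, key):
             return self._store[key]

     cfg = Worm()
     cfg.write("Sample Rate", 1000.0)       # a scalar
     cfg.write("Channels", ["AI0", "AI1"])  # an array
     print(cfg.read("Channels"))
     ```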
  22. How long did you have to run it to see the time to plot increase? I added a feedback node to keep track of how long it takes to clear the graph and I saw 10950ms or less. After running it for about 5 minutes it was around the same. One issue is there is no synchronization between the producer and the consumer loops. You have one loop generating 100 new data points every 1ms, and a consumer loop that runs every 10ms getting 100 points at a time. But because your producer runs faster and overwrites the previous data in the global, the new 100 points you read will not be a continuation of the previous 100 points. I modified your global so that when you do a write you append to the data in the global, and after it reaches the desired size it resets (a small sketch of a queue-based alternative is below). I tried modifying the VIs and posting them, but my LabVIEW has decided it doesn't like your VIs anymore for some reason...it feels like Friday.
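     As a plain-text stand-in for the VIs, here is the synchronization difference in a small Python sketch: a queue the consumer drains means no chunk gets overwritten or re-read, unlike a global the producer keeps clobbering. The chunk size and loop counts are placeholders.

     ```python
     import queue
     import threading
     import time

     data_q = queue.Queue()  # replaces the overwritten global variable

     def producer():
         sample = 0
         for _ in range(50):                 # 50 chunks of 100 points
             chunk = list(range(sample, sample + 100))
             sample += 100
             data_q.put(chunk)               # enqueued, never overwritten
             time.sleep(0.001)               # roughly 1 ms per chunk

     def consumer():
         received = 0
         while received < 5000:
             chunk = data_q.get()            # blocks until data is available
             received += len(chunk)          # chunks stay contiguous, none lost
         print("received", received, "points in order")

     t1 = threading.Thread(target=producer)
     t2 = threading.Thread(target=consumer)
     t1.start(); t2.start()
     t1.join(); t2.join()
     ```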
  23. I wondered how that worked, that's neat. I for some reason always thought it was all the packages in one, so I never installed it because I was worried I would have two versions of OpenG installed on my palette if I did. Thanks for the info.
  24. Not just the disabled diagram structure. I found one time where my code used the conditional disable structure to do different things depending on whether it was in the runtime engine or not. Both cases were executable and worked fine. Then I changed the VI to be inlined. The VI ran just fine in the development environment, but the runtime engine case used a property node, which isn't allowed in an inlined VI.