A Scottish moose

A Scottish moose last won the day on June 9 2017

LabVIEW Information

  • Version
    LabVIEW 2016
  1. Hey everyone! I've looked through Google and the LAVA forums for this problem and haven't found much, so I thought I'd ask and see if anyone has ideas.

     Problem: I have an executable that launches applets from several different PPLs. I define which PPL should be loaded by passing an INI file as a command line argument, and that file also provides any other important startup information. I've been trying to force the application's icon to change so that, depending on what the user is doing, the icon matches the process I'm presenting them. Originally I thought this would be rather trivial, since LabVIEW provides an invoke node to set the icon, but after attempting it (and then checking the documentation) I realized it is not available at run time (and according to the docs it doesn't support .ico files either): http://zone.ni.com/reference/en-XX/help/371361R-01/lvprop/vi_set_vi_icon/

     My hope is that I can force the application icon on the taskbar and the title bar to match the .ico file I'm providing to LabVIEW in the INI file. Any thoughts on this? I assumed it would be rather simple, just a call to the right property/invoke/ActiveX node, but I haven't come up with much so far. Thanks for your help, Tim
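     To sketch what I'm after: since the LabVIEW invoke node is editor-only, one fallback is the underlying Win32 route, sending the window a WM_SETICON message with an icon loaded from the .ico file. An exe could reach this via a Call Library Function Node; the Python below is just an illustration of the Win32 calls, and `set_window_icon` is my own illustrative name, not any library's API.

```python
import ctypes
import sys

# Win32 constants for the WM_SETICON message (values from the Windows SDK)
WM_SETICON = 0x0080
ICON_SMALL = 0        # title-bar icon
ICON_BIG = 1          # Alt-Tab / taskbar icon
IMAGE_ICON = 1
LR_LOADFROMFILE = 0x0010

def set_window_icon(window_title, ico_path):
    """Force the title-bar/taskbar icon of a top-level window from a .ico file.

    Returns True on success, False when not on Windows or when the window
    or icon file cannot be found/loaded.
    """
    if sys.platform != "win32":
        return False  # this is a Win32-only technique
    user32 = ctypes.windll.user32
    hwnd = user32.FindWindowW(None, window_title)
    if not hwnd:
        return False
    hicon = user32.LoadImageW(None, ico_path, IMAGE_ICON, 0, 0, LR_LOADFROMFILE)
    if not hicon:
        return False
    # Set both the small (title bar) and big (Alt-Tab/taskbar) icons
    user32.SendMessageW(hwnd, WM_SETICON, ICON_SMALL, hicon)
    user32.SendMessageW(hwnd, WM_SETICON, ICON_BIG, hicon)
    return True
```

     The window title would have to match the applet's front panel title, which the launcher already knows from the INI file.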
  2. Hello everyone, TL;DR: any thoughts on whether it's a good or bad idea to spin off an actor from a QMH framework? I've got some library code that would be valuable, but the project itself doesn't justify the Actor Framework.

     I'm brainstorming ideas for a tester that I'll be building over the next few months. The project I'm currently winding down is an Actor Framework test system that has come out really nicely. Part of the project was an actor built to handle all of the DAQ and digital control. The main actor spins it off and tells it when and what tasks to launch. It sends back data and confirms commands, uses dynamic events to keep up with the generated signals, and uses actor tasks to ship the data back to the controller. It works amazingly well. Nothing revolutionary for sure, but very handy.

     This next project doesn't really need the Actor Framework; it's much smaller and has a much shorter list of requirements. That said, I'm curious about integrating my DAQ actor (Dactor, because who can resist an amalgamation?!) into the project. Any thoughts on whether it's a good or bad idea to spin off an actor from a QMH framework? Is this even possible given how the AF tree is designed to work? Thanks for reading! Tim
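     The shape I'm imagining, sketched in plain Python rather than LabVIEW (so queues and a thread stand in for AF queues and an async-launched actor; `DaqWorker` and `qmh_loop` are illustrative names, not AF API): a simple message-handler loop launches a long-lived worker and talks to it over command/reply queues.

```python
import queue
import threading

class DaqWorker(threading.Thread):
    """Stand-in for the 'Dactor': runs until told to stop, replies on its own queue."""
    def __init__(self):
        super().__init__(daemon=True)
        self.commands = queue.Queue()
        self.replies = queue.Queue()

    def run(self):
        while True:
            cmd = self.commands.get()
            if cmd == "stop":
                self.replies.put("stopped")
                break
            # Real code would start/stop DAQ tasks here; we just acknowledge.
            self.replies.put(f"ack:{cmd}")

def qmh_loop(messages):
    """QMH analogue: handle each message in turn; 'launch daq' spins off the worker,
    everything after that is forwarded to it and its reply collected."""
    worker = None
    log = []
    for msg in messages:
        if msg == "launch daq":
            worker = DaqWorker()
            worker.start()
        elif worker is not None:
            worker.commands.put(msg)
            log.append(worker.replies.get())
    return log

print(qmh_loop(["launch daq", "start task", "stop"]))
# -> ['ack:start task', 'stopped']
```

     In AF terms the open question is whether the caller must itself be an actor for Launch Actor to behave, which is exactly what I'm asking above.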
  3. Does anyone know of a way to do a Ctrl+drag on the block diagram (adding or removing space) that only affects the current structure? I find myself wondering whether this functionality exists often enough that I thought I'd ask. Basically, if you have a For Loop inside a Case structure, is there a way to drag inside the For Loop and grow that structure without affecting the Case structure's size or position? Thanks, Tim
  4. Yes, they are... however, this one was from the classic palette. Perhaps you've taught me something new. I was under the impression that only the controls in the system palette were OS dependent, but I bet you are correct: the system button in the classic palette is OS dependent too. Good catch. Don't listen to my previous posts, then, because my conclusion was incorrect. Also, thank you for your observation. Tim
  5. Update: I was able to replicate the original look with a 'Flat Square Button' and some color matching in about five minutes. Just be aware that if you move to SP1 of 2016, some of those graphics will change on you. Replicating them isn't a big effort, however. Cheers, Tim
  6. Hello everyone, I noticed an interesting change today that I thought was worth mentioning. I've been using the classic system button for some of my UIs. It's flat and looks clean, less busy. I noticed today that the graphics for this button differ from one of my PCs to another. My laptop, which is running 2016.0, looks like the original; my desktop, which is running 2016.0f2, looks like the 'different' one. Both are 2016, but one has no depth graphic and one does. I've dug through the menus folder, and it looks like the classic and modern controls are not housed there, which makes sense really, since they are probably proprietary. I'd prefer the original look, as it matches the flat Windows 10 style that's in vogue right now. Just something I noticed and thought was worth mentioning. Cheers, Tim
  7. Hello everyone, a couple of CLA Summits ago (and I think at an NIWeek or two as well) one of the NI R&D guys demoed a software library that used variant attributes to create a lookup table. It was a class-based interface that could be swapped out for an actual database later if needed. I'm in the process of developing an application that could really use this concept. I remember sitting in on the discussion, but I don't have any notes on the name of the project or where to find it. I did some digging on NI.com and LVPM but came up with nothing so far. Does anyone here remember that discussion and perhaps know what I'm talking about?

     TL;DR: a couple of years ago an NI R&D engineer demoed a lookup table that was class based and ran on the variant attribute engine in the background. I'm trying to remember what it was called, and I'm looking for tips. Thanks, Tim
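     To make the pattern concrete as I remember it, here is a minimal sketch in Python (a dict stands in for the variant attribute store; all class and method names are mine, not the library's): one abstract lookup-table interface, an in-memory backend, and a database backend that can be dropped in behind the same calls.

```python
import sqlite3

class LookupTable:
    """Abstract key/value interface the application codes against."""
    def get(self, key):
        raise NotImplementedError
    def put(self, key, value):
        raise NotImplementedError

class MemoryTable(LookupTable):
    """Variant-attribute analogue: fast in-process key/value store (a dict here)."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value

class SqliteTable(LookupTable):
    """Drop-in replacement backed by an actual database."""
    def __init__(self, path=":memory:"):
        self._db = sqlite3.connect(path)
        self._db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    def get(self, key):
        row = self._db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return row[0] if row else None
    def put(self, key, value):
        self._db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))

# Both backends answer through the same interface, so swapping is transparent.
for table in (MemoryTable(), SqliteTable()):
    table.put("DUT-42", "pass")
    print(table.get("DUT-42"))  # -> pass (from each backend)
```

     The appeal of the original demo, if I recall right, was exactly this: start with the variant-attribute backend and swap in the database only if the project grows into it.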
  8. Hey everyone! I'm working on a test system that requires the operators to sign the test reports. In the previous generation this was done by the print/sign/scan method. During one of the meetings it was mentioned that getting around this requirement would be nice, so I recommended we look into a digital signature pad and see what would be required to integrate one. I've been thinking about ordering one and just giving it a go, but I thought I'd first ask and see who has done this with LabVIEW before. I know someone has; I just haven't found the documentation online yet. Here's how I expect it would go:

     1. The software prompts the user to sign at the end of the test.
     2. The signature pad saves the image to the hard drive or provides it to the client through an API (any experience with how this usually works is appreciated).
     3. My software acquires the image and saves it to a named range in Excel using the Report Generation Toolkit, currently my report writing tool of choice.
     4. ...Profit!

     Does this theory match reality? What are your experiences? Do you have any models you prefer to work with? I dug for a few minutes on this and didn't come up with much, so perhaps a discussion on the subject would be valuable. Thanks for the help! Tim
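     Steps 1-3 above could be sketched like this (Python for illustration; this assumes the pad's SDK simply drops an image file on disk, which vendors differ on, and `capture_signature` is a placeholder for whatever the real pad API provides, not a real SDK call):

```python
import shutil
import tempfile
from pathlib import Path

def capture_signature(out_path: Path) -> Path:
    """Placeholder for the vendor SDK call that writes the signed image.
    Real pads differ: some write a file, some hand back raw bitmap data."""
    out_path.write_bytes(b"\x89PNG fake-signature-bytes")  # stand-in image data
    return out_path

def attach_signature_to_report(report_dir: Path, serial: str) -> Path:
    """Steps 1-3 analogue: capture the image and file it next to the test
    report under a predictable name, ready for the report writer to place
    into a named range in the Excel file."""
    report_dir.mkdir(parents=True, exist_ok=True)
    raw = capture_signature(report_dir / "signature_raw.png")
    final = report_dir / f"{serial}_signature.png"
    shutil.move(str(raw), str(final))
    return final

demo_dir = Path(tempfile.mkdtemp())
result = attach_signature_to_report(demo_dir / "unit_001", "unit_001")
print(result.name)  # -> unit_001_signature.png
```

     The per-serial filename keeps each signed image tied to its report even if the operator signs several units in a row.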
  9. What about stripping the VI of its borders during run time? This is what I've done, and it's worked well. If the VI is just a box during run time, you'll end up with that nice splash-screen look... The attached shows how I set up my splash screens, and that has worked for me in 2015 and 2016.
  10. Using 2016. Checked the strict type def idea: nope, using a regular type def. Good idea, though! I've tried to reproduce it a bit today but haven't had much luck. Yay for non-repeatable failures!
  11. Exactly! This was my thought too, and the exact reason I add to the end of an enum if at all possible. I'm going to work on it some more today and see if I can figure out why.
  12. Hey everyone, I'm working on a midsize (Actor Framework) project that has a main command message used by all of my nested actors. This command message's payload is a typedef'd enum that defines which job gets done. As I've been developing the project I've been adding values to the enum and handling the resulting job in 'send command.vi'. This works well because all of my nested actor messages that don't have a payload just send a constant enum to the controller to request a job. So, I was adding a new function to said enum, and after editing the typedef, LabVIEW reset ALL constant instances of this typedef to their default across my entire project. Every instance in memory of this enum was set back to the zero value. I have everything backed up in SVN, so this isn't a big deal from a time standpoint, but I don't think this is intended behavior. Has anyone else seen this?

     TL;DR: I have a typedef'd enumeration, and after adding a value to the end of the enum, LabVIEW reset ALL constant instances of this enum to value 0... ideas? Thanks, Tim
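     For anyone wondering why "add at the end" is the safe edit in the first place, here is the renumbering hazard sketched with Python's IntEnum (since LabVIEW enums are graphical; the job names are made up): inserting a value mid-list shifts every later value, so anything stored by number silently changes meaning.

```python
from enum import IntEnum

class JobV1(IntEnum):   # the original typedef
    IDLE = 0
    START = 1
    STOP = 2

class JobV2(IntEnum):   # same typedef after inserting a value mid-list
    IDLE = 0
    PAUSE = 1           # <- everything after this point shifts up by one
    START = 2
    STOP = 3

saved_command = int(JobV1.START)  # a constant that was saved by numeric value: 1
print(JobV2(saved_command).name)  # -> PAUSE, not START
```

     Appending at the end avoids the shift entirely, which is why a wholesale reset to zero after an end-of-list edit looks like a bug rather than expected mutation behavior.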
  13. How does purging the mutation history work? Are there any white papers you could link on this? Thanks,
  14. I forget sometimes how easy it is to test a function or behavior in LabVIEW: Ctrl+N, throw a few VIs on the block diagram, and see what it does. Very powerful for sure!