Everything posted by A Scottish moose

  1. Hey everyone! I've looked through Google and the LAVA forums for this problem and haven't found much, so I thought I'd ask and see if anyone has ideas. Problem: I have an executable that launches applets from several different PPLs. I define which PPL should be loaded by passing an INI file as a command-line argument, and that file also provides any other important start-up information. I've also been trying to force the application's icon so that, depending on what the user is doing, the icon matches the process I'm presenting them. Originally I thought this would be rather trivial, since LabVIEW provides an invoke node to set the icon, but after attempting it (and then checking the documentation) I realized it is not available at run time, and according to the docs it doesn't support .ico files either: http://zone.ni.com/reference/en-XX/help/371361R-01/lvprop/vi_set_vi_icon/ My hope is that I can force the application icon in the taskbar and the title bar to match the .ico file I'm providing to LabVIEW from the INI file. Any thoughts on this? I assumed it would be rather simple, just a call to the right property/invoke/ActiveX node, but I haven't come up with much at this point. Thanks for your help, Tim
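The INI hand-off described above can be sketched in Python. The section and key names below are hypothetical; the post doesn't show the actual file layout:

```python
import configparser

# Hypothetical INI layout for the launcher (section/key names are
# assumptions, not taken from the original post).
ini_text = """
[startup]
ppl_path = C:\\applets\\tester.lvlibp
icon_path = C:\\applets\\tester.ico
"""

cfg = configparser.ConfigParser()
cfg.read_string(ini_text)

# The executable would load the PPL named here and hand icon_path to
# whatever icon-setting mechanism ends up working at run time.
ppl_path = cfg["startup"]["ppl_path"]
icon_path = cfg["startup"]["icon_path"]
print(ppl_path)
print(icon_path)
```

The same idea applies regardless of how the icon problem is eventually solved: keep all per-applet start-up data in one parsed structure.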
  2. Hello everyone, TL;DR - Any thoughts on whether it's a good or bad idea to spin off an actor from a QMH framework? I've got some library code that would be valuable, but the project itself doesn't justify AF. I'm brainstorming ideas for a tester that I'll be building over the next few months. The project I'm currently winding down is an Actor Framework test system that has come out really nicely. Part of the project was an actor built to handle all of the DAQ and digital control. The main actor spins it off and tells it when and what tasks to launch. It sends back data and confirms commands, uses dynamic events to keep up with the generated signals, and uses actor tasks to ship the data back to the controller. It works amazingly well. Nothing revolutionary, for sure, but very handy. This next project doesn't really need Actor Framework; it's much smaller and has a much shorter list of requirements. That being said, I'm curious about integrating my DAQ actor (Dactor, because who can resist an amalgamation?!) into the project. Any thoughts on whether it's a good or bad idea to spin off an actor from a QMH framework? Is this even possible, given how the AF tree is designed to work? Thanks for reading! Tim
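Stripped of the LabVIEW specifics, the pattern in question (a queued message handler that launches a helper worker and exchanges commands and data over queues) can be sketched like this. This is a loose Python analogy, not the Actor Framework itself:

```python
import queue
import threading

def daq_worker(cmd_q, data_q):
    # Helper "actor": consumes commands, ships confirmations/data back.
    while True:
        cmd = cmd_q.get()
        if cmd == "stop":
            data_q.put("stopped")
            break
        data_q.put(f"ack:{cmd}")

# The QMH side: launch the helper, then talk to it only via queues.
cmd_q, data_q = queue.Queue(), queue.Queue()
worker = threading.Thread(target=daq_worker, args=(cmd_q, data_q))
worker.start()

cmd_q.put("start task")
cmd_q.put("stop")
replies = [data_q.get(), data_q.get()]
worker.join()
print(replies)
```

The key point the analogy preserves: as long as all communication goes through the two queues, the caller doesn't need to be an actor itself, which is essentially what spawning an AF actor from a QMH would require.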
  3. Does anyone know of a way to do a Ctrl+drag on the block diagram (adding or removing space) that only affects the current structure? I find myself wondering whether this functionality exists often enough that I thought I'd ask. Basically, if you have a For Loop inside a Case Structure, is there a way to drag inside the For Loop and grow that structure without affecting the Case Structure's size or position? Thanks, Tim
  4. Yes, they are... however, this one was from the classic palette. Perhaps you've taught me something new. I was under the impression that only the System palette contained OS-dependent controls, but I bet you are correct: the system button in the classic palette is OS dependent. Good catch. Don't listen to my previous posts, then, because my conclusion was incorrect. Thank you for your observation. Tim
  5. Update: I was able to replicate the original look with a 'Flat Square Button' and some color matching in about five minutes. Just be aware that if you move to SP1 of 2016, some of those graphics will change on you. Replicating them isn't a big effort, however. Cheers, Tim
  6. Hello everyone, I noticed an interesting change today that I thought was worth mentioning. I've been using the classic system button for some of my UIs. It's flat and looks clean. Less busy. I noticed today that the graphics for this button differ from one of my PCs to another. My laptop, which is running 2016.0, looks like the original; my desktop, which is running 2016.0f2, shows the 'different' one. Both are 2016, but one has no depth graphic and one does. I've dug through the menus folder, and it looks like the classic and modern controls are not housed there, which makes sense really, since they are probably proprietary. I'd prefer the original look, as it matches the flat Windows 10 style that's in vogue right now. Just something I noticed and thought was worth mentioning. Cheers, Tim
  7. Hello everyone, A couple of CLA summits ago (and I think even at an NI Week or two), one of the NI R&D guys demo'd a software library that used variant attributes to create a lookup table. It was a class-based interface that could be swapped out for an actual database later if needed. I'm in the process of developing an application that could really use this concept. I remember sitting in on this discussion, but I don't have any notes on the name of the project or where to find it. I did some digging on NI.com and LVPM but have come up with nothing so far. Does anyone here remember that discussion and perhaps know what I'm talking about? TL/DR - A couple of years ago an NI R&D engineer demo'd a lookup table that was class based and ran on the variant attribute engine in the background. Trying to remember what it was called; looking for tips. Thanks, Tim
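The concept described (a class-based lookup interface whose default implementation is just an in-memory key/value store, swappable for a real database later) might look like this in Python. The names here are mine, not the NI library's:

```python
from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    """Lookup-table interface; callers never see the backing store."""
    @abstractmethod
    def put(self, key: str, value): ...
    @abstractmethod
    def get(self, key: str): ...

class InMemoryStore(KeyValueStore):
    """Default implementation: a plain dict, playing the role that
    variant attributes play in the LabVIEW version."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

# A database-backed subclass of KeyValueStore could replace this later
# without touching any calling code.
store: KeyValueStore = InMemoryStore()
store.put("DUT42", {"limit": 3.3})
print(store.get("DUT42"))
```

The design payoff is the same one the demo highlighted: application code depends only on the abstract interface, so the storage engine is a deployment decision, not an architectural one.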
  8. Hey everyone! I'm working on a test system right now that requires the operators to sign the test reports. In the previous generation this was done by the print/sign/scan method. During one of the meetings it was mentioned that getting around this requirement would be nice. I recommended we look into a digital signature pad and see what would be required to integrate one. I've been thinking about ordering one and just giving it a go, but I thought first I'd ask and see who has done this with LabVIEW before. I know someone has; I just haven't found the documentation online yet. Here's how I expect it would go: 1. The software prompts the user to sign at the end of the test. 2. The signature pad saves the image to the hard drive or provides it to the client through an API (any experience with how this usually works is appreciated). 3. My software would acquire the image and save it to a named range in Excel using the Report Generation Toolkit, currently my report-writing tool of choice. 4. Profit! Does this theory match reality? What are your experiences? Are there any models you prefer to work with? I dug for a few minutes on this and didn't come up with much, so perhaps a discussion on the subject is valuable. Thanks for the help! Tim
  9. What about stripping the VI of its borders during runtime? This is what I've done, and it's worked well. If the VI is just a box during runtime, you'll end up with that nice splash-screen look... The attachment shows how I set up my splash screens, and that has worked for me in 2015 and 2016.
  10. Using 2016. Checked the strict typedef idea: nope, I'm using a regular typedef. Good idea, though! I've tried to reproduce it a bit today but haven't had much luck. Yay for non-repeatable failures!
  11. Exactly! This was my thought too, and the exact reason I add values at the end of an enum whenever possible. Going to work on it some more today and see if I can figure out why.
  12. Hey everyone, I'm working on a midsize (Actor Framework) project that has a main command message used by all of my nested actors. This command message's payload is a typedef enum that defines what job gets done. As I've been developing the project, I've been adding values to the enum and handling the resulting job in the 'send command.vi' message. This works well because all of my nested actor messages that don't have a payload just send a constant enum to the controller to request a job. So I'm adding a new function to said enum, and after editing the typedef, LabVIEW resets ALL constant instances of this typedef to default across my entire project. Every instance in memory of this enum gets set back to the zero value. I have everything backed up in SVN, so this isn't a big deal from a time standpoint for me, but I don't think this is intended behavior. Has anyone else seen this? TL/DR - I have a typedef enum, and after adding a value to the end of the enum, it resets ALL constant instances of this enum to value 0... ideas? Thanks, Tim
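A Python analogy of why appending, rather than inserting, is the safe enum edit: stored constants keep their meaning only if the earlier entries keep their numeric values. (LabVIEW typedef constants behave analogously, which is why the wholesale reset described above looks like a bug rather than intended behavior.)

```python
from enum import IntEnum

class CommandV1(IntEnum):
    INIT = 0
    RUN = 1

# Appending a value at the end leaves existing numeric values intact.
class CommandV2(IntEnum):
    INIT = 0
    RUN = 1
    ABORT = 2   # new job, appended; nothing above it moved

# A constant stored as 1 under V1 still decodes to RUN under V2.
print(CommandV2(1).name)
```

Had ABORT been inserted between INIT and RUN instead, the stored value 1 would silently change meaning, which is the failure mode appending is meant to avoid.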
  13. How does purging the mutation history work? Any white papers you could link to on this? Thanks,
  14. I forget sometimes how easy it is to test a function or behavior in LabVIEW. Ctrl+N, throw a few VIs on the block diagram, and see what it does. So very powerful, for sure!
  15. It looks like those drivers haven't been updated in a while. 7.1. That's before my time... They use the IVI standard, which is fine, but it will require you to install the IVI driver as described in the readme... In my opinion that's about one step away from abandonware unless they give you some installation examples. The IVI requirement really drives your options here: you'll need to set up a driver session and go through that process in MAX. I am not a fan of IVI, and it's not for lack of experience. Once you get your IVI driver installed for the Thermotron, you'll see it listed in the driver sessions under MAX. Then you'll create a logical name for your device and connect it to the driver with the physical connection information (IP address). Honestly, this is a LOT of work just to get a temperature update. See if you can get away with the basic GPIB route and avoid using that driver. I think that will be far easier. Cheers, Tim
  16. Either option ends up being pretty easy, although I would say the GPIB route would be simpler with VISA. Does your device provide any mid-level drivers for output? It looks like the Thermotrons are soft panels, so I would expect those tools to be provided. I'm sure either way the process will be simple. If you have a TCP driver that abstracts out the commands and packet parsing for you, go that route for sure. Otherwise, GPIB is pretty quick to get up and running with VISA commands. I've done both, and most basic drivers come together in less than a day. Cheers, Tim
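The "driver that abstracts out the commands" idea can be sketched as a thin class that takes any transport (a VISA/GPIB session, a TCP socket, or a fake for development) and hides the command strings behind named methods. The command below is a placeholder, not real Thermotron syntax:

```python
class ChamberDriver:
    """Minimal instrument-driver sketch: command strings live here;
    the transport (GPIB via VISA, raw TCP, or a test double) is injected."""
    def __init__(self, transport):
        self._t = transport  # must provide query(str) -> str

    def read_temperature(self):
        # "TEMP?" is a stand-in command, not the real instrument syntax.
        return float(self._t.query("TEMP?"))

class FakeTransport:
    """Stand-in for a VISA session or TCP socket during development."""
    def query(self, cmd):
        return "23.5" if cmd == "TEMP?" else ""

driver = ChamberDriver(FakeTransport())
print(driver.read_temperature())
```

Swapping FakeTransport for a real GPIB or TCP transport is the only change needed to go live, which is why either physical route "ends up being pretty easy" once the commands are abstracted.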
  17. I might start looking into more JSON based solutions. Thanks for the tip
  18. Hey everyone, I am working on a backup function for a test executive. The backup uses the 'class to XML' VI to create an XML string and then saves it to a file to be reloaded later. All of my test-specific information lives in the class (or one of its children). I like this functionality because it makes backup and reload brainless. It just works... until now... I've got a test class for my current tester that has grown rather large. Everything works fine until the tester loads some waveform data into either of the waveform arrays. Without data in those fields the class reloads just fine; otherwise it fails and says the XML is corrupted. As you can see in my backup VI, I have built in a workaround that flattens the waveform arrays to strings, drops them back into the class private data, deletes the waveform arrays, and then writes the class. This works! Much to my surprise, both the waveform data and the rest of the class data are written to file and reloaded without error. Does anyone have any knowledge of or experience with this? Cheers, Tim
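The workaround described (flatten the binary waveform data to a string before serializing the class, so the XML never has to carry raw binary) looks roughly like this in Python, using base64 and struct as stand-ins for LabVIEW's flatten-to-string:

```python
import base64
import struct
import xml.etree.ElementTree as ET

# Binary payload of the kind that can corrupt naive text serialization.
samples = [1.0, 2.5, -3.25]
raw = struct.pack(f"<{len(samples)}d", *samples)

# Flatten to a text-safe string before it goes into the XML document.
root = ET.Element("TestClass")
ET.SubElement(root, "waveform").text = base64.b64encode(raw).decode()
xml_text = ET.tostring(root, encoding="unicode")

# Reload: decode the flattened string back into the original samples.
loaded = ET.fromstring(xml_text)
raw2 = base64.b64decode(loaded.find("waveform").text)
restored = list(struct.unpack(f"<{len(samples)}d", raw2))
print(restored)
```

The round trip succeeds because the XML layer only ever sees printable characters, which is presumably why the flatten-first workaround in the post sidesteps the "corrupted XML" failure.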
  19. On the topic of edit-time performance... Over the last month I've been developing a test system that now has around 650 custom VIs in memory at one time. Towards the end of the project, I started to notice a quickly increasing load on Windows from LabVIEW that disappeared after removing the SVN toolkit. I'm sure it depends on the processing power of your machine, but I find that 500+ VI projects start to see major impact from the Viewpoint add-on. It's a fantastic tool, but that's the trade-off. Cheers, Tim
  20. I did a project recently that had quite a few large pictures that I converted to pixmaps during initialization. The images were static throughout the program, so there wasn't a lot of access to this function, just at the beginning. I have not seen this problem in my program. Do you do this often during execution? Perhaps frequency has an impact. Hope that information is helpful. Cheers, Tim Edit: I use 2016 SP1
  21. Is this a simple installer package? Do you have any add-on installers in your installer file? Might be worth checking the 'additional installers' page and see if there's something checked there that might contribute. If so try removing all additional installers and retrying the install to see if it'll take.
  22. This is an interesting point. I haven't worked in an agile environment yet. As I've started to get some projects under my belt, I can see how it would be valuable and how it would help avoid some of those traditional pitfalls. Agile also forces more frequent interaction between developer and customer (compared with a waterfall or gated model) and reduces the 'please let this work so I don't look like an idiot' situations. Or perhaps it just makes them lower stakes? Either way, I think it makes the process better for all parties involved. Thanks for the thought!
  23. Currently working on the last few features of a tester, and I had an idea: ask about projects that either took way longer than they should have, or where you reached burnout and still had tons of implementation and testing work to complete. What's the longest project you've worked on, or a project where you hit burnout well before the end? How did you keep up motivation and keep slogging through to completion? It might be interesting to hear people's thoughts on getting through those long projects, or the last few weeks/months of a project when things seem to take forever. Cheers, Tim