Everything posted by hooovahh
-
Reg. Centralized error handler.
hooovahh replied to Hemant Chourasia's topic in Certification and Training
In my own words, I'd say a central error handler is a separate code module (i.e. a library, class, while loop, or VIG) that is dedicated to capturing and reacting to errors generated in other code modules in the application. This allows errors to be organized by their origin, time, or other factors. A few possible reactions to an incoming error are to log it, notify the operator with an asynchronous dialog (allowing the application to continue trying to operate), notify the administrator (email, SMS, etc.), or shut down if a specific error, or enough errors, are seen (if your application sees 100 errors in one second, it is safe to say you should probably just shut down safely).
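Since LabVIEW is graphical and can't be pasted as text, here is a minimal Python sketch of the same pattern: a dedicated handler loop consuming error records from every other code module through a queue. The record fields and the 100-errors-per-second threshold mirror the description above; everything else (names, message formats) is an illustrative assumption.

import queue
import threading
import time

error_queue = queue.Queue()

def central_error_handler():
    recent = []  # timestamps of recent errors, for rate-based shutdown
    while True:
        record = error_queue.get()
        if record is None:  # sentinel used to shut the handler down
            break
        # React to the error: here we just log it, but this is where you
        # could show an asynchronous dialog or notify an administrator
        print(f"[{record['time']:.3f}] {record['source']}: "
              f"{record['code']} {record['msg']}")
        recent = [t for t in recent + [record['time']]
                  if record['time'] - t < 1.0]
        if len(recent) >= 100:  # 100 errors in one second: bail out safely
            print("Error flood detected, initiating safe shutdown")
            break

handler = threading.Thread(target=central_error_handler, daemon=True)
handler.start()

# Any other code module reports errors the same way:
error_queue.put({"source": "DAQ Loop", "time": time.time(),
                 "code": -200279, "msg": "Buffer overwritten"})
-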
Reg. Centralized error handler.
hooovahh replied to Hemant Chourasia's topic in Certification and Training
VI Engineering had an NI Week presentation on this, and there are open discussions you can find through search. Some material on the dark side:
http://www.ni.com/example/31253/en/
https://forums.ni.com/t5/Reference-Design-Content/Structured-Error-Handler-SEH-Library/ta-p/3495302
https://forums.ni.com/t5/Actor-Framework-Discussions/Central-Error-Handler-in-the-Actor-Framework/td-p/3436974
https://forums.ni.com/t5/LabVIEW-Development-Best/Advanced-Error-Handling-in-LabVIEW/ba-p/3485995
-
Reg. Centralized error handler.
hooovahh replied to Hemant Chourasia's topic in Certification and Training
There are lots of topics on central error handlers, and multiple NI Week sessions have been dedicated to them. As for your implementation, it is up to you. An FGV can make for a fine central error handler if all you want to do is log all errors from all locations, email someone when an error is seen, or shut down on specific errors. But of course if an error comes in at a code module, it usually makes sense to try to handle that error as close to where it was generated as possible, and only let unhandled errors get passed to the error handler.
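As a small text-language sketch of that escalation idea (Python again, with an assumed error_queue feeding the central handler): handle what you can at the point of origin, such as retrying a timeout, and only escalate what you couldn't handle.

import queue
import time

error_queue = queue.Queue()  # consumed by the central error handler
VISA_TIMEOUT = -1073807339   # an example of an error worth handling locally

def read_instrument(read_fn, retries=3):
    for _ in range(retries):
        code, data = read_fn()
        if code == 0:
            return data          # success, nothing to report
        if code != VISA_TIMEOUT:
            break                # not something we know how to handle here
    # Local handling failed or didn't apply: escalate to the central handler
    error_queue.put({"source": "Instrument Loop", "time": time.time(),
                     "code": code, "msg": "Read failed after local handling"})
    return None
-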
How do you organize your custom error code files?
hooovahh replied to A Scottish moose's topic in LabVIEW General
Interfacing with external tools is another interesting one, where the habit is to just do the 1-to-1. Let's say I'm interfacing with some 3rd party DLL, and on return it gives a number from 1 to 200, with a table in the documentation stating what each error corresponds to. I've seen cases where people will just add a constant to every value returned. I know the ADCS toolkit does something similar to this. The error code is actually returned as part of the communication. Here is the documentation with a negative response table. NI just takes a constant of -8000, subtracts whatever the response value is, and uses that as the LabVIEW error code. This is quite handy because when you get an error code from LabVIEW it will state things like "Service Not Supported", which tells me the type of issue I'm having. These are the kinds of error codes that I can't really find an equivalent existing error code for.
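As a rough Python sketch of that offset scheme (the two table entries are real UDS negative response codes used only for illustration; the real table comes from the device documentation):

NEGATIVE_RESPONSE_TEXT = {
    0x11: "Service Not Supported",
    0x22: "Conditions Not Correct",
}

ERROR_BASE = -8000

def to_error_code(response):
    # Offset the raw response so it lands in a reserved error-code range
    return ERROR_BASE - response

def describe(response):
    text = NEGATIVE_RESPONSE_TEXT.get(response, "Unknown response")
    return "Error %d: %s" % (to_error_code(response), text)

print(describe(0x11))  # Error -8017: Service Not Supported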
-
How do you organize your custom error code files?
hooovahh replied to A Scottish moose's topic in LabVIEW General
I too am interested in hearing how people tackle this issue. Personally I think it starts with keeping the number of custom error codes to a minimum. I have a tool (that I can't seem to find online, but thought I had posted) which lets you search for existing errors that might match what you need. Then I use that as the error code and put the custom text in an Error Ring, using the optional %s inputs. Of course that step could be eliminated if you could search within the Error Ring. I understand the benefit of custom errors, but the dependency issues and code ranges usually mean I just stick with the ones on the system.
-
I worked at a company that had all USB devices disabled. Not just USB memory devices, but also USB CAN, USB DAQ, USB serial, USB GPIB, everything. For the DAQ stuff we would just simulate the hardware, then deploy to test it. It was a pain but not too bad. The USB memory restriction was easy to get around. The policy made the USB stick read-only, so to write to it you would mount the E:\ drive (or wherever your USB was) to some empty folder like C:\USB. Then you could read and write to the C:\USB folder all you wanted, which actually wrote to the USB device. I guess they just set up software to disable writing to drives that were marked as removable. That DgAgent sounds nasty.
-
I'm loving this thread, it is stirring up so many memories and stories I had mostly forgotten. I was at a customer's site that had two production lines running, which a previous vendor had built using LabVIEW and TestStand. The company I was working for had been tasked with updating the systems by adding a few new features, updating to the latest LabVIEW/TestStand versions, and updating the OS. Luckily they sent the 3rd system to our facility to update and upgrade while they were still using the other two. We updated the system and made the changes they asked for, but there was still some onsite work required because we couldn't interface with their network databases until we were there.

Once onsite, I started setting up the 3rd stand right next to the other two that weren't upgraded yet. My first day there I started asking around about the credentials used to access the network and database that I needed my software to log to. No one seemed to know, and even the IT guys were confused about what I was talking about. So with the permission of the program manager I looked at the source on the 1st stand to see how it was logging to the databases. Sure enough there was the user name and password, and it was something like User: TestUser and Password: TestUserPassword. So I updated our code, got it working, and the customer was happy.

I saw the IT guy walking around so I let him know: "Oh yeah, I found the credentials used for the production systems, it is the TestUser account." His eyes got really large and he said "No one should be using that." And I'm like "Oh okay, well we're just using the same credentials the last vendor was using, and it can be changed later." He said okay and left. About 10 minutes later I noticed the other two production systems were having all kinds of problems, failing every part they tested. Just then the IT guy came up and said "By the way, I deleted that user account so no one can use it." Luckily the program manager was within earshot and started blaming him for causing the production lines to go down, and demanded he fix it. So IT made a new account with a new password and we used that. I'm not sure why he thought it was okay to delete that user when I had mentioned the production lines were using it, but he did, and I was glad we didn't spend too much time trying to fix it.

Oh, and we also have a No Sleep program on our test systems, which is an AutoIt EXE. As for updates, we have the systems download updates but not install them, and then we just need to remember to reboot the machines when testing is finished, which might not be for months. Every once in a while IT comes down and says there is some major new virus and we need to update all machines now, which is a pain, but we haven't had any issues yet.
-
Use off-the-shelf software? Heavens no. Why don't we just create our own approval software that doesn't integrate with anything, requires extra training, is supported by no one that speaks English, is buggy, and charges a licensing cost calculated by the number of mouse clicks used during the approval...

Another story before I forget it. I was deploying a test system to a production facility. The system involved two GigE cameras, and looked at the DUT's display to determine if it was working correctly. Everything worked great before we got to the facility, but then all of a sudden the images from the cameras would sometimes get black bars, obscuring the image to where the vision system would fail to work. We looked into all kinds of jumbo packet settings and Windows network settings, and just before going to buy a new network card I asked if anyone had used the machine, and a worker mentioned someone from IT had. I talked to them, and it turned out they had installed network-monitoring software to make sure the machine was locked down. It was also disrupting all network traffic, including the cameras. We asked if it could ignore the two ethernet ports going to the cameras, and they said the software didn't allow for that level of control. We tried to uninstall their software, but it required a password during the uninstall, and IT wouldn't give it to us and wouldn't allow us to uninstall it. At this point the customer was getting increasingly frustrated with their own IT department, especially since it was costing time and money for me and others to be there while the system was crippled. We were still local administrators on the PC, so we just disabled the Windows services and stopped their software from running on startup, and production was running again. The customer was happy the system was running, and IT didn't really care as long as it was installed on the PC.
-
I made a new thread and started with a story that you reminded me of.
-
So based on conversations in another thread, we've come to find several instances in our careers where IT and corporate policies that mean well get in the way of getting work done. Policies intended to keep IP from leaking, or to keep viruses and ransomware from taking over our PCs, tend to make software developers find workarounds so they can get their job done. So I thought we could make a thread where we share some of our IT horror stories, focusing on previous work for obvious reasons...

One company I worked with had locked down the work laptops to where you couldn't really do anything other than answer emails and write Word documents. So when we needed to install LabVIEW, which required administrator privileges, there was a whole approval process to get the IDE and some of the DAQ drivers installed. This included several levels of approvals, and justification for why you wanted to be an admin. You would apply to have administrator privileges given to you, but you only had a 2-hour time slot; after that the credentials you were given wouldn't work, and you had to apply again. Of course the first thing we would do in that 2 hours is use the administrator account we were given to promote our own user to a local administrator, so we never had to apply again.

This company also had an app store for PC software. To me the concept of having a PC app store seemed odd. I sorta consider the whole internet the app store, and the idea that an offline app store could stay up to date with the newest software from across the internet seemed silly.
-
Yes, but I'm guessing the security concern is not stealing IP, but instead plugging in a random USB drive that might have some malware or rootkits on it. Of course this should be no different than putting in a DVD from said company, since the USB mounts the same as optical media, but I guess you wouldn't know that until you plugged it in. NI has been doing this for various bundles for at least a couple of years now. I personally like it, just so I have one thing to copy to the network for network-based installs, instead of having to copy 4 or 5 DVDs and then merge them into a single folder so it doesn't prompt for the next disk. If you really want to discuss corporate policies getting in the way of getting your job done, or the effort spent reinventing the wheel at a large company, then I have lots to say. I'm not certain that we have the most restrictive policies in the industry, but I know we would be top contenders.
-
Great example, and one I'd forgotten about. So you have a couple of options (all of which sound like git to me anyway). You can start editing the code even though you don't have the lock, and send that person an email saying you are editing code they have locked. When they get in they will either say "Oh yeah, sorry, I didn't make any changes", in which case you commit yours over theirs (breaking the lock), or they say they have made changes, in which case you need to merge the changes just like a git commit when you both made changes at the same time (am I wrong?). Or you can steal the lock while they are on vacation and force them to have the conversation with you when they get in and try to commit, at which point again either they have pending changes and a merge needs to happen, or they don't. Isn't it still just like git, in that if changes happen in only one place at a time, that is what gets committed, and if there are multiple changes at one time, a merge needs to happen? Again I want to make it clear I don't use git, so my understanding might be incorrect, but from the outside I didn't see that as a compelling reason to change.
-
I've not used git, but I hear some of these pros/cons. The one I've heard is that git is sometimes better because it forces you to communicate with your developers to know what they are working on. But if this were a server-side SVN setup with locks, you'd know what other people are working on because it's locked. If you need to work on something that is locked, ask the person that locked it if they are done and can unlock it. I don't need to track down every developer and ask what they are working on or if I can work on a code module. If something I want to work on is locked, I talk to that developer about it; if it isn't locked, I'm free to do whatever. The serverless side of things does allow you to have the whole code base at your disposal, but you can do something similar with SVN when you aren't connected to a server. You'll just have lots of things to resolve when you get back online, since there is a chance multiple people changed the same VI, just as you would with git. Since I'm in the office 99% of my development time, having it be server-dependent is a non-issue, and the other 1% of the time I either VPN in, or I'm going onsite to deploy a system and will typically lock the whole project before going out, or not lock it and merge the changes I make while offsite when I get back. I was also confused by the overhead comments. Do you mean workflow overhead?
-
I know this is off topic, but I do see this issue once in a while on Chrome. When it happens for me, nothing is lost and my text is still there, so I just copy it, refresh the page, and make a new post pasting it in. Are you saying you hit submit, the page refreshes, but your post isn't there? Not that I'm equipped to troubleshoot these types of issues.
-
Oh I'm sorry, I was not familiar with a Signature Line. One thing I find useful is the Excel feature of recording a macro, looking at the generated code, and then translating that into LabVIEW. The problem is I can't even create a signature line in a footer in normal Excel. If I click into the footer, then Insert >> Text is all greyed out. Still, I enabled macro recording, recorded my operations adding a signature line at a normal place in the worksheet, and looked at the generated code, which was the following:

ActiveWorkbook.Signatures.AddSignatureLine _
    "{00000000-0000-0000-0000-000000000000}"

Not sure what it means, but I'm guessing you can't automate the creation of that signature line through VBA this way, and that it was designed to have a dialog prompt you for the settings for some reason.
-
That makes me think that maybe your role is more of an architect than a developer. I didn't find the CLD very difficult personally. It was just a single loop QMH using arrays of strings, and I finished early and spent the extra time double checking my work. The CLA I barely passed, and worked up to the last minute.
-
Error accessing site when not logged in.
hooovahh replied to ShaunR's topic in Site Feedback & Support
I've contacted Michael.
-
Storing/Importing DAQmx Task Configurations
hooovahh replied to A Scottish moose's topic in LabVIEW General
I use MAX as much as I can. I try not to reinvent the wheel unless there are limitations I really need to work around, and MAX just does so much that I try to avoid custom solutions. I'll still incorporate MAX in my application by doing things like opening test panels for hardware, or creating and editing channels, tasks, and scales using a VI I posted here. But as for the tasks and how they work in applications: I'll usually make the tasks in MAX and test them there, then export them to an NCE file, which can go along with the source code and documents how the channels are all set up. Then in application builder, you can specify this NCE file to be imported when the installer runs. I do however remember some issues with editing tasks that have already been opened. Like having my software open a task, then trying to edit the scale of a channel, and finding the new scale wouldn't be applied until I closed and reopened my software. I must not have been stopping or closing all of the references to the scale, or to the channel using the scale, so that might not be a real limitation of DAQmx, but it is something to look out for. I've had no experience with INIs or XML as a result.
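For what it's worth, the same load-a-MAX-task workflow looks roughly like this with the nidaqmx Python bindings; the task name "MyVoltageTask" is an assumption and would match whatever was defined in MAX or imported from the NCE file.

from nidaqmx.system.storage import PersistedTask

# Load the task exactly as MAX configured it (channels, timing, scales)
task = PersistedTask("MyVoltageTask").load()
with task:
    data = task.read(number_of_samples_per_channel=100)
    print(data)
-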
I made a suggestion for this on the Idea Exchange to give more font control over listboxes and tables. If you want this you should go vote there.
-
Personally I think this is a good idea, and NI is already doing something similar by tagging NXG posts so they can be identified. I do think there are some issues with making a new subforum. One is that if we do make an NXG subforum, what happens when it isn't called NXG anymore? I've heard that some day NI will just call it "LabVIEW", and the LabVIEW we all know and love might be called something else, like "Classic LabVIEW". But then again that might not be for many years, until NXG has functional parity with LabVIEW. There are also going to be times when a subject is about both NXG and Object Oriented Programming; which subforum should it go into? Now we'd say NXG, but in a few years would it make more sense to put it in OO? Anyway, I'll talk to some admins and see if we can agree on something.
-
Attached is an example that does this using the Report Generation Toolkit, but through ActiveX calls. I think this needs to be run on a workbook after all other writing has taken place, because it works on a per-worksheet basis. In my case I had N worksheets already made, some with data and some with charts and graphs, and calling this set the text and logo in the footer. Saved in 2015. Footer in Excel.vi
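For anyone who can't open the VI, here is a rough sketch of the same ActiveX calls through the pywin32 bindings; the file paths, footer text, and logo name are illustrative assumptions, while the "&G" code is Excel's way of rendering the assigned footer picture.

import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
wb = excel.Workbooks.Open(r"C:\reports\report.xlsx")

for ws in wb.Worksheets:
    ws.PageSetup.CenterFooter = "Generated by the test system"
    # Assign a picture to the left footer, then use "&G" so Excel shows it
    ws.PageSetup.LeftFooterPicture.Filename = r"C:\reports\logo.png"
    ws.PageSetup.LeftFooter = "&G"

wb.Save()
wb.Close()
excel.Quit()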
-
Tortoise SVN. I've used Mercurial and Perforce too, but SVN is the one I'm most familiar with, and the setup was stupid simple. The only LabVIEW tooling I use is for performing a rename in a project and in SCC at once.
-
The NI Package Builder is less of a replacement for VIPM in my opinion, and more of a distribution method for installing software in general. NI may mature their package manager to one day replace VIPM, but as it is now, NIPM is not so much focused on installing LabVIEW packages as it is on installing software. Now LabVIEW packages can be software (of course), but you won't find things like palette editing tools, running pre/post-install VIs, or anything LabVIEW-specific at the moment. It is more like an app store, where you can one day search for Tools Network packages, or larger packages like NI DAQmx. This is how NXG is currently distributed: you install NIPM, then choose to install NXG and its dependent packages. There is also an offline installer, but I suspect it is still just NIPM with the NXG packages as an offline repository. Of course this also opens up the possibility of making your own packages, and you could have the "Hooovahh's Awesome Program" package which depends on the NXG run-time, DAQmx, NI-DMM, and NI-VISA. Then these packages can be included in an offline repository that gets installed, or (potentially) an online repository, so users of my software can update to newer versions of my awesome program, or newer versions of NI's packages if dependencies allow it. Because of this I highly doubt it will ever replace VIPM for current-generation LabVIEW needs. At the moment this just doesn't seem to be what NIPM is made for.
-
Yeah I remember you, I don't remember what I was drinking but I remember you. Glad you had a good time at NI Week and the BBQ. That project looks very cool by the way.
-
VIMs are amazing and will cause a revamping of polymorphic reuse libraries. I did a talk on this at NI Week in August which covers XNodes; the second half talks about VIMs and this structure, which was unofficially named the "Type Enabled Structure" when it was introduced in 2016. http://forums.ni.com/t5/2016-Advanced-User-Track/TS9451-XNodes-Treasures-of-Reuse-in-LabVIEW-s-Attic/gpm-p/3538650 As others have said, it works like a disabled diagram: it goes through trying to enable each case one at a time, and enables the one that creates no broken wires. This can mean even more reuse if you can make your code generic enough to accept scalars or arrays, as it does in this case. The structure itself is not official in 2017 and has some editor issues. The impression I get is that it is stable (I mean, it has to be, since NI is using it), but only after the code has been written and compiled. It still has odd cases where, when you are first developing with it, the VIM won't adapt the way you want, but I've found it works just fine with code that is written and tested.
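If a text-language analogy helps, Python's singledispatch behaves a bit like the structure described above: one generic entry point, and the implementation that fits the input type is the one that gets used, so scalars and arrays can share a single interface. This is only a loose analogy, not how LabVIEW implements it.

from functools import singledispatch

@singledispatch
def scale(value, factor):
    # Fallback case: scalar input
    return value * factor

@scale.register(list)
def _(value, factor):
    # Specialized case: arrays apply the scalar case element by element
    return [v * factor for v in value]

print(scale(2.0, 3.0))         # 6.0
print(scale([1.0, 2.0], 3.0))  # [3.0, 6.0]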