Posts posted by Bryan

  1. I just found out this morning that "RunState.Execution.TSDatabaseLoggingDatalink{GUID}" isn't a valid location for TestStand 2010 (and I'm not sure about other versions).  For 2010 at least, it is "RunState.Execution.TSDatabaseLoggingDatalink{Database Schema Name}".

    After a lot of futzing around yesterday, I simply added a statement expression step after "Log to Database" in the "Log to Database" callback sequence.  Note: I didn't want to create a new local variable in the sequences of each tester, so I used Step.Result.Status to temporarily hold my value (Probably not a good practice, haha):

    Expression that works for TestStand 2010 (not sure about other versions), using Parameters.DatabaseOptions.DatabaseSchema.Name and replacing all spaces (" ") with "_":

    // Assign the schema name to our step status as a temporary local variable (Evaluate won't work unless this is done, for some reason)
    Step.Result.Status = Parameters.DatabaseOptions.DatabaseSchema.Name,
    // Replace all " " with "_" in the schema name and build the path to the datalink object... then set its value to Nothing (Destroys the DB reference, forcing TestStand to re-open it on the next UUT run.)
    Evaluate("RunState.Execution.TSDatabaseLoggingDatalink" + SearchAndReplace(Step.Result.Status, " ", "_") + " = Nothing"),
    // Update the step status to what it would normally be if we hadn't had to use it as a temporary local variable.
    Step.Result.Status = "Done"


    Expression that works for at LEAST TestStand 2013 and 2016 (Using Parameters.ModelPlugin.Base.GUID and replacing all "-" with "_"):

    // Assign the GUID value to our step status as a temporary local variable (Evaluate won't work unless this is done, for some reason)
    Step.Result.Status = Parameters.ModelPlugin.Base.GUID,
    // Replace all "-" with "_" in the GUID and build the path to the datalink object... then set its value to Nothing (Destroys the DB reference, forcing TestStand to re-open it on the next UUT run.)
    Evaluate("RunState.Execution.TSDatabaseLoggingDatalink" + SearchAndReplace(Step.Result.Status, "-", "_") + " = Nothing"),
    // Update the step status to what it would normally be if we hadn't had to use it as a temporary local variable.
    Step.Result.Status = "Done"


    Edit: I found today that one of our TS2016 installations used the same method as I had posted in the TS2010 version above.  The DatabaseSchema.Name property had a combination of periods, spaces and dashes, all of which I had to "SearchAndReplace" with underscores.  I'm not currently sure what's causing the inconsistencies between versions (using the GUID vs. the Schema Name), but so far I've had to tweak each one on a case-by-case basis.
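    Since the characters needing replacement have varied by installation (periods, spaces, dashes), the normalization amounts to chained SearchAndReplace calls.  Purely for illustration (a Python sketch, not a TestStand expression; the example name is made up):

```python
def to_datalink_suffix(name: str) -> str:
    """Replace periods, spaces and dashes with underscores, mirroring
    nested SearchAndReplace calls in a TestStand expression."""
    for ch in ".- ":
        name = name.replace(ch, "_")
    return name

# Hypothetical schema name containing all three problem characters:
print(to_datalink_suffix("My Schema-v1.2"))  # My_Schema_v1_2
```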

  2. I figured I would post a resolution to my own problem for anyone who may run into the same thing. 

    While searching for something unrelated today, I stumbled across my answer.  TestStand creates a connection to the configured database with the first UUT that is tested and maintains that connection while UUTs are continuously tested.  The database connection is only released when the sequence is stopped.  I was able to come up with my solution by essentially compiling what I found in my searches.

    In my situation - "hiccups" in network connectivity or with the SQL server cause that connection to become invalid.  This is why we get our database logging errors and lose logged data.  I had read about the offline processing utility, but I wanted something simpler such as a LabVIEW test step that I could drop into a sequence that would handle things gracefully instead of setting up offline processing configurations on all of the testers. 

    After the first time the "Log to Database" callback is run, the datalink object becomes available at: "RunState.Execution.TSDatabaseLoggingDatalink{GUID}" (the GUID will vary).

    My solution (in my testbed so far) is a VI that locates that object and writes a null value (an empty variant constant) to it.  This appears to force TestStand to re-establish the database connection when each subsequent UUT is tested.  It makes sure that when the test sequence sits in limbo for hours or days (plenty of time for DB/network "hiccups"), the data will still be logged the next time a UUT is run - assuming the DB and network are up and running when called.  I have yet to put this method to the test on one of the actual testers to validate it, but it seems promising in my "simulated environment" (we all know how that sometimes goes).  I should also mention that this should go at the end of the "Log To Database" sequence callback.

    I wish I could share the VI that I created, but I'm not sure of my company's policy on doing so; my description above should be enough to get people on the right path.

  3. TestStand Version(s) Used: 2010 thru 2016
    Windows (7 & 10)
    Database: MS SQL Server (v?)

    Note: The database connection I'm referring to is what's configured in "Configure > Result Processing", (or equivalent location in older versions).

    Based on some issues we've been having with nearly all of our TestStand-based production testers, I'm assuming that TestStand opens the configured database connection when the sequence is run, and maintains that same connection for all subsequent UUTs tested until the sequence is stopped/terminated/aborted.  However, I'm not sure of this and would like someone to either confirm or correct this assumption. 

    The problem we're having: nearly all of our TestStand-based production testers have intermittently been losing their database connections, returning an error (usually after the PASS/FAIL banner).  I'm not sure if it's a TestStand issue or an issue with the database itself.  The operator, seeing and caring only about whether it passed or failed, often ignores the error message and soldiers on - mostly ignoring every error message that pops up.  Testers at the next higher assembly that look for a passed record of the subassembly's serial number in the database will then fail their test because they can't find one.

    We've tried communicating with the operators to either let us know when the error occurs, re-test the UUT, or restart TestStand (usually the option that works), but it's often forgotten or ignored.

    The operators do not stop the test sequence when they go home for the evening/weekend/etc., so TestStand is normally left waiting for them to enter the next serial number of the device to test.  I'm assuming that their connection to the database is still open during this time.  If so, it's almost as though MS SQL has been configured to terminate idle connections, or as though, when something happens with the SQL Server, the connection isn't properly closed or re-established.

    Our LabVIEW-based testers don't appear to have this problem unless there really is an issue with the database server.  I believe the majority of these testers open > write > close their database connections at the completion of a unit test.

    I'm currently looking into writing my own routine to be called in the "Log to Database" callback which will open > write > close the database connection.  But, I wanted to check if anyone more knowledgeable had any insight before I spend time doing something that may have an easy fix.
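    To make the open > write > close idea concrete, here's a minimal sketch of the pattern (Python, with the stdlib sqlite3 module standing in for the real MS SQL Server connection; the table and column names are made up):

```python
import sqlite3

def log_result(db_path: str, serial: str, status: str) -> None:
    """Open a connection, write one UUT result, and close it again, so no
    long-lived connection is left to go stale between UUTs."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS uut_results (serial TEXT, status TEXT)"
        )
        conn.execute("INSERT INTO uut_results VALUES (?, ?)", (serial, status))
        conn.commit()
    finally:
        conn.close()  # the connection never outlives a single UUT's logging

log_result("results.db", "SN-0001", "Passed")
```

    The same pattern would apply with whatever driver talks to the real SQL Server.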

    Thanks all!

  4. On 5/6/2013 at 4:59 PM, hooovahh said:

    I've seen people add LabVIEW to their resume with less experience then that.

    Well, to be fair, some companies have done the same thing to their employees who have never claimed any experience with a particular item/software/etc.  I've worked at a couple of places where it has been: "Oh, you double-clicked the icon for "X" application?  You're now the resident EXPERT!".

    Not too long ago, we had a "Kaizen" event where we had moved a piece of equipment in a production cell.  I knew nothing about said piece of equipment but it was Windows-Based, so I figured I would go ahead and reconnect the keyboard, mouse, monitor and power it up just to make sure that it still booted after the move.  A couple of people had seen me do this and now every time there's an issue with it, I'm called to look into it. 

    (Sorry to go OT)

  5. @pawhan11 That sounds exactly how SVN was set up where I currently work - Including having LabVIEW ISOs/Installers stored within. (I wonder if we work at the same company.)

    One of the drawbacks you've mentioned, and that I've found with SVN, is the inability to search for directories/files/etc.  I believe there are tools out there to do this, but since I'm not an administrator, I can't implement anything myself.  Something I've done is use the SVN commands to generate a text file listing of ALL of the SVN contents... then, if I need to search for something, I use Notepad or Notepad++ to search for keywords in the file.  Once found, I then have the path to locate it in the SVN repository.

    Something I would want to do if it were me - would be to set up a new repository (or reconfigure the current - if able) to use the proper multi-project structure that SVN recommends. 

    Our logging routines are home grown.  Of course, we're not logging EVERYTHING - mostly just overall test results and not each bit of data collected from a UUT.  Our testers log production pass/fail data to CSV files (older testers) or to the company's MS SQL database.

    I've wanted to use SQLite for local logging, but the pushback I got was that it adds a dependency - having to create or install a viewer to access the logged data.  Most of the other engineers who would be accessing that data want something ASCII-based that's easy to read and import into other formats, requiring nothing other than common applications (e.g. Excel, Notepad, etc.).  They like the KISS method.
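    For what it's worth, the "viewer dependency" concern can be softened a bit: a stock Python install can dump an SQLite table straight to the kind of plain-ASCII CSV those engineers want, with no special viewer.  A sketch (table/column names are made up):

```python
import csv
import sqlite3

def export_table_to_csv(db_path: str, table: str, csv_path: str) -> None:
    """Dump an entire SQLite table, header row included, to a plain CSV file."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(f"SELECT * FROM {table}")  # table name assumed trusted
        with open(csv_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(col[0] for col in cur.description)  # column names
            writer.writerows(cur)
    finally:
        conn.close()
```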

  7. I haven't used many VCS systems aside from the few available at my current and previous employers for LabVIEW development.  Since some of those places didn't view LabVIEW as a programming language, I wasn't often granted access to their VCS systems, so my experience is limited.

    I spent a lot of time in Visual SourceSafe, but that's not going to help you here.  However, I was able to grasp SVN + TortoiseSVN pretty quickly.  I've also (for a very short period of time) had a little experience with Trac, though not enough to be a resource for knowledge on it.

    Additionally, I recall that VisualSVN Server made it easy to install and manage an SVN server.

    I've tried to grasp Git, but having used TortoiseSVN + SVN for so long, I struggled to pick it up quickly, so I abandoned it.

  8. Via a Google search, I was able to find a Declaration of Conformity dated some time in 2011, and that it's a 12-bit digitizer, so it shouldn't be that old.  You may have to get into contact with NI to find out more about this card.

    Unless the card was such a flop for NI that they chose to erase all evidence of its existence - never to be spoken of again.

  9. 3 hours ago, Grv Chy said:

    Thank you Bryan for your suggetion. Do you know any example of sharing data with external PC connected via LAN cable. I am not able to figure out how to share data and save my .csv file directly in a external PC(containing no LabVIEW). I would be very thankful.

    The Internet is loaded with examples of file sharing, so I don't really want to reinvent the wheel by writing up a tutorial.  A Google search will yield tons of tutorials and examples and you can even search based on your computers' operating systems.  

    For W10, Microsoft has the following article: https://support.microsoft.com/en-us/help/4092694/windows-10-file-sharing-over-a-network

  10. Do you mean a *.txt file generated from a LabVIEW program, saved onto a remote PC?  You should be able to set up a shared folder with appropriate permissions on your remote PC and have LabVIEW save the file to the share.  I won't get into the details, since there's gobs of info on the internet on how to set up shares.

    Once it's set up, you can just have your LabVIEW program point to the path of the shared folder.  For example: If your remote PC name is "REMOTEHOST", you name the shared directory as "LabVIEWDATA", and your text file name is going to be "TestData.txt", the file path for your text file would be (on a Windows machine): "\\REMOTEHOST\LabVIEWDATA\TestData.txt".  You can optionally replace the "REMOTEHOST" with the IP address of the remote machine, but if it's a DHCP assigned address, it could change on you in the future. 
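    Building that path programmatically is just string concatenation; a quick sketch (Python, using the hypothetical names from the example above):

```python
# Hypothetical names from the example above:
host = "REMOTEHOST"      # or the remote machine's IP address (DHCP caveat applies)
share = "LabVIEWDATA"    # the shared directory on the remote PC
filename = "TestData.txt"

# Backslashes must be escaped in string literals, hence the doubling below.
unc_path = f"\\\\{host}\\{share}\\{filename}"
print(unc_path)  # \\REMOTEHOST\LabVIEWDATA\TestData.txt

# Writing to it would then just be a normal file write, e.g.:
# with open(unc_path, "w") as f: f.write("test data")
```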

  11. I'm not sure that you're going to be able to (easily) do what you want without some advanced programming to automate/manipulate Microsoft Excel (for instance, using ActiveX or .NET).  Even then, I'm not sure it's possible, as it involves Microshaft software over which LabVIEW will have limited control.

    I agree with 'crossrulz'.  Whoever is pushing this requirement may need to reevaluate the need for this functionality.

  12. There is a "Flush File" function in the "File I/O >> Adv File Funcs" menu that you can have execute after every write.  
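    The same idea in text form - a Python sketch of flushing after every write (os.fsync is the extra, belt-and-suspenders step that asks the OS to commit its own cache to disk):

```python
import os

def append_row(path: str, row: str) -> None:
    """Append one CSV row and push it out of the buffers immediately."""
    with open(path, "a") as f:
        f.write(row + "\n")
        f.flush()             # flush Python's user-space buffer to the OS
        os.fsync(f.fileno())  # ask the OS to commit its cache to disk

append_row("data.csv", "1,23.4")
```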

    However, if you already have Excel open to view the CSV file, I don't believe that Excel will automatically update your spreadsheet contents every time the file changes, if that is what you're trying to do.  I haven't tried that myself.

    If the above doesn't work, it may be that Excel doesn't constantly monitor the file and automatically update.  You could try using the "Refresh All" button in Excel under the "Data" menu, but you'd have to do this manually.  Additionally, you'd have to open your reference to the file with permissions configured in such a way that it doesn't block other applications from using the file if another already has it open.

    Someone more experienced in Excel and actively monitoring file contents may be able to provide more info.

  13. 52 minutes ago, drjdpowell said:

    Interesting quote from the ex-NI person posting on Reddit:

    We all know that they wouldn't be the first company to do that, and certainly not the last.  It's happened in places I worked before, where we were forced to use obscure, non-user-friendly software applications that were not the industry norm, for reasons unknown.  If I were to speculate, it was because someone in the upper echelon had been dazzled, or was (or knew) someone who could gain financially by way of it.  These applications were, as was said, enterprise-level agreements/suites/etc. and likely cost the company a lot of money, both in purchasing/agreements and in the time wasted on learning curves and inefficient usage.

    Before I joined my current company, "Thou shalt use LabVIEW" was the command from above.  Not all were pleased with the mandate at the time and some still aren't, but it's one of the top reasons that helped me to get a job there.

  14. We've been using these scales by Sartorius.  They were Denver Instruments scales before being incorporated into Sartorius.  I don't recall the measurement resolutions, but I do know that they are extremely sensitive.  They're USB devices detected by Windows as Serial COM port devices, and we use serial commands to interface with them (e.g. via NI-VISA).

    However, we have had some that stop responding to serial commands over time.  Sartorius wasn't much help with the issue, though.  In the end, we took one apart and rerouted a ribbon cable (or otherwise messed with the cables) to get it to communicate more reliably.  We have another scale that we haven't been able to fix with that method; it's currently sitting on a shelf, as its warranty ran out, and we just keep it for manual measurements.

  15. Okay, sorry... I thought maybe you were monitoring THEIR software's commands/responses by some method.  When I said "duplicating", I thought you hadn't tried it with your code yet; it sounds like you have.  So you're communicating with the UPS now?  Good!

    I'm still wondering though if it's expecting hex values that would represent the ASCII characters for the ...03 02 18 00... and possibly the "9B".  

    I'd be interested to see if you get any responses for the following commands.  They are what I was trying to explain above (though, probably not explained well... I'm not always good with the words putting together thing):

    7F30 3330 3231 3830 3039 42


    7F30 3330 3231 3830 309B

  16. I wonder if it's somehow a mix of hex and ASCII, based on what crossrulz said - like the 7F is supposed to be a raw hex byte (non-printable character), but the remaining values (possibly including the checksum) are to be the ASCII representations of the numbers... for instance:

    7F 03 02 18 00 9B

    Could really be sent as:

    7F 30 33 30 32 31 38 30 30 39 42

    (Again, I'm not sure if the checksum would have to be ASCII or Hex... you could try it both ways. )
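    That re-encoding is easy to check by hand; here's a Python sketch that keeps the 7F lead-in raw and ASCII-fies the rest (including the checksum - per the caveat above, it may need to stay raw instead):

```python
frame = bytes([0x7F, 0x03, 0x02, 0x18, 0x00, 0x9B])

# Keep the 0x7F lead-in raw; re-encode the remaining bytes as the ASCII
# characters of their hex digits ("03" -> 0x30 0x33, and so on).
ascii_body = "".join(f"{b:02X}" for b in frame[1:]).encode("ascii")
candidate = frame[:1] + ascii_body

print(candidate.hex(" ").upper())  # 7F 30 33 30 32 31 38 30 30 39 42
```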


  17. Yeah, working from home can be a pain for those of us who require physical hardware in order to be effective at what we're doing.  We're doing somewhat of a 'rotation' of Test Engineers where I work, so on my days in the building, I can grab things to take home if needed... but ya always forget something. Haha!

    It would be great if you had access to a physical DB9 Serial connection on a computer to try.  I know that very few laptops come with them these days and we're forced to use those &%#@ adapters.  I know that some of the pins used by the dry contacts are used for other things with normal serial communication, which is why I'm confused that Tripp Lite says it'll work with serial - unless you have to have a custom cable of some kind. 

    Like I mentioned before though, I'd be tempted at this point to try using the USB connection on the UPS and install their driver.  Windows may be able to see, and allow you to treat it as a serial device via that method. 

    I hope you're able to figure out what's up.  If I think of anything else that could be useful I'll be sure to post it.  In the meantime, if you get it figured out, please keep us updated on what you found. 

  18. I just looked up the owner's manual for that UPS (https://www.tripplite.com/support/owners-manual/45875).  If you look at Pg 6, it looks like there are 2 DB9 connectors on the back - is yours like this?  (Stupid question: are you plugged into the right one?)  Also, on Pg 12, it mentions "dry contact" connections under USB and RS-232.  It's kind of confusing, but it looks like it could be using a mixture of contact closure and RS-232 communications on one cable.  This seems unusual (and I may be reading it wrong; I'm in a hurry), but if that's the case, connecting anything other than pins 2 and 3 to your computer could be causing issues.  What concerns me is that pin 5 is normally used for communication ground, but the documentation looks like it says that the contact closures are using those pins as well as pin 3.

    I'm wondering if the DB9 connections aren't intended for serial communication.  Some devices will show up when plugged in via USB as a Serial Device with a COM port assignment (may require a driver installation from them).  Perhaps that's what they mean by serial/rs-232?  If so, their documentation (that I've found anyway) isn't very helpful.

    I also just noticed that your baud rate in the picture above is set to 2400, which isn't common on modern serial devices anymore.  9600 used to be the norm, but I've also seen 19200 and 115200 commonly used.  I've never interfaced with one of Tripp Lite's UPSs before, though.

  19. I work with those CURSED USB to RS232 adapters nearly every day and have had a whole range of different issues with them.  I won't get into it all here, but here are the troubleshooting steps that I typically do in no particular order:

    1. Jumper the Tx and Rx pins (pins 2 and 3) on the connector closest to your device to form a loopback.
    2. Use PuTTY or some other method to send data out; you should get the exact same data back.  If you do, then your communication is making it round-trip to at LEAST the connector of your device.
    3. Try a null-modem cable/adapter, which swaps the Tx and Rx pins between one end of the cable and the other.  Sometimes manufacturers don't make it easy to figure out whether it's required.
    4. Double-check that your baud rate, data bits, stop bits, parity and handshaking are configured to match what the UPS is expecting.  It looks like you're specifying a termination character for READ (0x0A, in the initialization) but not enabling it.  If the UPS requires a termination character, you'll have to explicitly send one with writes; it isn't appended automatically.  If the UPS is expecting it, you may have to add an "0A" to the end of your hex string.
    5. Look in the documentation to find out if the UPS requires any special start/stop or termination characters - i.e., is what you're sending properly formatted?  I've had devices that required unusual starting and termination characters before, with or without the common 0x0A termination character.
    6. Your VISA Open timeout is zero; try something a little longer - possibly 50 ms to 250 ms or so.
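    On item 4: since the write side doesn't append the termination character for you, the appending has to be done explicitly.  A sketch of that step (Python; the 0x0A is taken from the initialization mentioned above):

```python
TERM = b"\x0A"  # the read termination character seen in the initialization

def with_termination(cmd: bytes, term: bytes = TERM) -> bytes:
    """Append the termination byte to an outgoing command if it's missing;
    serial writes won't add it automatically."""
    return cmd if cmd.endswith(term) else cmd + term

print(with_termination(b"\x7F\x03\x02\x18\x00\x9B").hex(" ").upper())
# 7F 03 02 18 00 9B 0A
```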

