joshxdr Posted March 10, 2011

I am using LV 7.0 on SPARC Solaris. I have two executables that read and write the same file. The file is a simple record of how many times a probe card has touched down on a wafer. I was not able to find a VI that can edit a file in place, so when the file needs to be edited, I delete it and write a new one. This causes problems when I run the program on two machines at the same time: once in a while, one machine tries to access the file during the brief window when it has been deleted but not yet rewritten. I suppose I could put the error into a case structure and retry until the file is present again. Is there a VI I am missing that can do in-place edits and would remove this collision problem?
Mr Mike Posted March 10, 2011

I think what you're describing is opening files for write. I'm 99.99% sure LabVIEW 7.0 can do this. Place an Open/Create/Replace File node on your diagram (in the File I/O palette). The third input on the left should be an access specifier: read/write, read-only, or write-only.

However, the overall problem you're describing is a race condition, and race conditions are bad. Newer versions of LabVIEW (I've only recently used 2009 and above) have network variables, which would probably work for you and are immune to race conditions in the situation you're describing*. They let you transmit data between two applications on a network.

*You can still create a race condition with network variables, but if one side only reads the data and the other only writes it, there's nothing to worry about.
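For readers mapping this to a textual language: the access specifier corresponds to ordinary file-open modes. A minimal Python sketch of the distinction (the file name is made up, and the file must already exist for "r+"):

    # "r+" = read/write an existing file in place (no truncation)
    # "w"  = replace: truncates the file to zero length on open
    # "a"  = append: all writes go to the end of the file
    f = open("counter.txt", "r+")   # the in-place update mode
    f.close()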
joshxdr Posted March 10, 2011

> Place an Open/Create/Replace File node on your diagram (in the File I/O palette). The third input on the left should be an access specifier: read/write, read-only, write-only.

I recall trying to open for write, but I had problems. Either I had a permissions issue (we are on a Unix system) or I only had an append option and not a replace option. I will recheck.

From the Unix command line, I can use utilities like sed or perl to perform in-place modifications to files. I assume these utilities are immune to contention issues, or are they...?

I assume the problem of multiple programs reading and writing the same file is a common one in the software world, and I would like to use an industry-standard solution rather than a kludge. Network variables are a LabVIEW-specific solution; I would be more comfortable with something more standard.
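For what it's worth, in-place editors like sed -i generally aren't doing locked in-place writes: they typically write a temporary file and rename it over the original. A hedged Python sketch of that pattern (rename is atomic on POSIX, so readers never see a half-written file, but it does nothing to serialize concurrent writers):

    import os, tempfile

    def replace_atomically(path, new_text):
        # Write the new contents to a temp file in the same directory,
        # then rename it over the original. On POSIX, rename is atomic:
        # a reader sees either the old file or the new one, never a
        # half-written or missing file. It does NOT serialize writers,
        # so two concurrent updates can still lose one of them.
        directory = os.path.dirname(os.path.abspath(path))
        fd, tmp = tempfile.mkstemp(dir=directory)
        with os.fdopen(fd, "w") as f:
            f.write(new_text)
        os.replace(tmp, path)   # atomic rename-over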
Mr Mike Posted March 10, 2011

The issue you may have run into was file locking. I don't know what LabVIEW does in terms of file locking on Unix-based systems. The text utilities you used were probably no more immune to it than anything else; it just happened that you didn't run into problems with them. I don't know of an industry-standard way of having two processes access the same file (other than trying not to do it). I'd say your best bet is to keep trying for a few seconds until the file is openable, but someone else might have a better suggestion.
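In textual form, the retry idea is just a poll loop around the open call. A minimal Python sketch (the timeout and poll interval are arbitrary choices):

    import time

    def open_with_retry(path, mode="r+", timeout_s=5.0, poll_s=0.05):
        # Poll until the file can be opened, or give up after timeout_s.
        deadline = time.monotonic() + timeout_s
        while True:
            try:
                return open(path, mode)
            except OSError:            # missing, locked, or permission denied
                if time.monotonic() >= deadline:
                    raise
                time.sleep(poll_s)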
joshxdr Posted March 10, 2011

I did a little research and it appears that this is not as simple as I thought. Serializing access to a shared resource seems to be a potential trouble spot for any kind of software. The only way to guarantee that a race condition does not occur is for process A to lock the file, and for process B to wait for the lock to be released. By accident I have created a poor man's file lock by deleting the original file after reading, although there is still a narrow window where a race condition can occur.

Does anyone know if there is a "file lock" VI and a "wait for lock" VI? Can I do this from the command line in Unix using the System Exec VI?

OK, here is an idea I got from the internet:

1. Check for the existence of sharedfile.lock.
2. If sharedfile.lock exists, wait 10 ms, then go back to step 1.
3. Create sharedfile.lock.
4. Open sharedfile.txt.
5. Overwrite sharedfile.txt with the new value.
6. Delete sharedfile.lock.

Does this sound like it would work?
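For illustration, the recipe as a minimal Python sketch (file names as in the steps above). Note the gap between the existence check and the create, which matters in the next reply:

    import os, time

    LOCK = "sharedfile.lock"

    def naive_acquire():
        while os.path.exists(LOCK):     # steps 1-2: poll until the lock is gone
            time.sleep(0.010)           # wait 10 ms
        # RACE WINDOW: between the exists() check above and the create
        # below, another process may have passed the same check.
        open(LOCK, "w").close()         # step 3: create the lock file

    def naive_release():
        os.remove(LOCK)                 # step 6: delete the lock file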
ned Posted March 10, 2011

> Is there a VI that I am missing that can do in-place edits and would remove this collision problem?

Depends on what you mean by an "in-place edit." The standard file functions can overwrite portions of a file. It's been a while since I used 7.0 and some of the file functions have changed, but I think you want the Seek function (under the File palette -> Advanced), which lets you set the access position in the file. When you then start writing, the data goes at that location, overwriting the existing data (any remaining data past the end of your write is unchanged). There is, however, no way to insert new data in the middle of a file without rewriting everything past the insertion point.
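As a textual analogue (a Python sketch; the file name, offset, and 8-byte record width are assumptions for illustration), seek-then-write overwrites bytes in place:

    # A made-up fixed-width record file: overwrite 8 bytes at a known
    # offset without touching the rest of the file.
    RECORD_OFFSET = 64                  # assumed position of the record
    with open("wafer.dat", "r+b") as f:
        f.seek(RECORD_OFFSET)           # set the access position ("Seek")
        f.write(b"%8d" % 1234)          # overwrite exactly 8 bytes in place
        # bytes before and after the written span are unchanged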
ShaunR Posted March 10, 2011

> Does anyone know if there is a "file lock" VI and a "wait for lock" VI?

Why not just use the "Deny Access" VI? It should be in LV 7.0, under File I/O >> Advanced File Functions.
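For the Unix-minded, the closest standard analogue to a deny-access style lock is advisory locking, e.g. flock. A hedged Python sketch (the file is assumed to already exist and hold an integer; "advisory" means only processes that also call flock are blocked):

    import fcntl

    # Advisory exclusive lock: cooperating processes calling flock will
    # block until the holder releases it (or closes the file).
    with open("sharedfile.txt", "r+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # block until we hold the lock
        text = f.read().strip()
        count = int(text) if text else 0
        f.seek(0)
        f.write(str(count + 1))         # rewrite the counter in place
        f.truncate()
        fcntl.flock(f, fcntl.LOCK_UN)   # release explicitly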
Mr Mike Posted March 10, 2011

> [the sharedfile.lock recipe above] Does this sound like it would work?

Maybe. If processes A and B check for sharedfile.lock at the same time and it doesn't exist, both will try to create the file and one will fail (one will create it first; the other will try to create it and find there's already a lock file). In that case, the one that failed should go back to step 1. I think your best bet is to use the locking you seem to be encountering to your advantage: if you try to open the file and it fails, try again until it doesn't fail.

> Serializing access to a shared resource seems to be a potential trouble spot for any kind of software.

Yes, it's a frequent problem. Within the same application there are a number of tools you can use to ensure safe use of a shared resource. Across applications it's a lot harder. (Typically the resource itself is responsible for that; e.g., Twitter often displays the fail whale.)

> The only way to guarantee that a race condition does not occur is for process A to lock the file, and for process B to wait for the lock to be released.

Yes, and it's hard (or maybe impossible?) to create your own lock with a separate file because of the timing issue I discussed above. Basically, any time there is more than one step to open and lock a resource, there's the potential for a race condition like the one we're talking about here.

> Does anyone know if there is a "file lock" VI and a "wait for lock" VI? Can I do this from the command line in Unix using the System Exec VI?

As far as I know, the answers are "maybe" and "no." Check out Deny Access in the File I/O -> Advanced palette. I've never used it, but it may do file locking? Maybe? Play around with it and see if it gets you what you need.
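As an aside, the usual way around that check-then-create gap on Unix (not a LabVIEW primitive; sketched here in Python) is to make the check and the create a single atomic step with the exclusive-create flag:

    import os, time

    def acquire_lock(lockpath="sharedfile.lock", poll_s=0.010):
        # O_CREAT | O_EXCL makes "check and create" one atomic operation:
        # exactly one process succeeds; the loser sleeps and goes back
        # to step 1, as suggested above.
        while True:
            try:
                fd = os.open(lockpath, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                os.close(fd)
                return
            except FileExistsError:
                time.sleep(poll_s)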
joshxdr Posted March 10, 2011

I have LV 7.0 on Solaris and LV 2010 on Linux, and I used both to check what I can do to lock files. In LV 7.0 there is a "deny access" input on File Open, but it does not do anything; perhaps it is only functional on Windows. LV 2010 has a separate "Deny Access" VI, and it does work. Unfortunately I need this solution to work on the Solaris system, and LV 7.0 does not have a dedicated "Deny Access" VI.

One thing that worked in LV 7.0 was the "Access Rights" VI, which modifies the Unix permissions on the file. Setting the permissions to 0 prevents the other process from accessing it. This is not a perfect solution, since I need to open and read the file before I can lock it this way, but it is better than nothing.
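In Python terms, the permissions hack looks roughly like this. Unix checks permissions at open() time, so a handle opened before the chmod keeps working; the gap before the chmod is the remaining hole:

    import os

    f = open("sharedfile.txt", "r+")       # must open (and read) before locking
    count = int(f.read().strip() or 0)
    os.chmod("sharedfile.txt", 0)          # "lock": peers' open() now fails
    try:
        f.seek(0)
        f.write(str(count + 1))            # our pre-chmod handle still works
        f.truncate()
    finally:
        f.close()
        os.chmod("sharedfile.txt", 0o644)  # "unlock": restore rw-r--r--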
joshxdr Posted March 10, 2011

I decided to cut my losses and live with imperfect counting for now. If I miss one or two touchdown counts out of a thousand, I don't really care. It turned out to be somewhat painful to replace the contents of a file: I had to find the length of the new string, manually move the EOF byte to the new string length, and then write the new data from the start of the file.
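In textual form (a Python sketch; the new value is made up), that dance is seek, write, then set EOF to the new length:

    new_value = "1042"                   # hypothetical new touchdown count
    with open("sharedfile.txt", "r+") as f:
        f.seek(0)                        # write from the start of the file
        f.write(new_value)
        f.truncate(len(new_value))       # move EOF to the new string length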
ShaunR Posted March 10, 2011

> It turned out to be somewhat painful to replace the contents of a file.

The only real (practical) way out of this scenario is to use a database, which handles locking, delayed writes, and concurrency for you. That's why websites run off databases and not file systems.
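As a concrete example of the point, a hedged sketch using SQLite, which lives in a single file but serializes concurrent writers itself (Python's stdlib sqlite3 shown; table, column, and card names are made up):

    import sqlite3

    con = sqlite3.connect("touchdowns.db", timeout=5.0)   # waits on locks
    con.execute("CREATE TABLE IF NOT EXISTS counter"
                " (probecard TEXT PRIMARY KEY, n INTEGER)")
    con.execute("INSERT OR IGNORE INTO counter VALUES (?, 0)", ("card42",))
    con.execute("UPDATE counter SET n = n + 1 WHERE probecard = ?", ("card42",))
    con.commit()
    con.close()

Two executables running this against the same file will each see the other's increments; the timeout makes a writer retry while the database is briefly locked rather than fail.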
smarlow Posted May 17, 2011

> One thing that worked in LV 7.0 was the "Access Rights" VI... This is not a perfect solution... but it is better than nothing.

Create a functional global VI for performing your file operations and call it directly to update your file in Executable #1. Call the same VI in Executable #2 using the VI Server Call by Reference node. The file I/O VI then becomes your shared resource rather than the file, and the race-condition/file-locking issues should be eliminated. If all you need is a shared in-memory counter, you should be able to eliminate the file entirely and just use a shift register or feedback node in the functional global VI.
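A textual analogue of this single-owner pattern (a sketch, not LabVIEW code): one process owns the counter and everyone else calls into it over RPC. Port, address, and method name are made up; Python's stdlib XML-RPC server handles requests one at a time, so calls are serialized by construction:

    from xmlrpc.server import SimpleXMLRPCServer

    count = 0                            # lives only in the owning process

    def touchdown():
        # Increment the counter and return the new value; requests are
        # handled sequentially, so there is nothing to race on.
        global count
        count += 1
        return count

    server = SimpleXMLRPCServer(("localhost", 8000))   # made-up port
    server.register_function(touchdown)
    server.serve_forever()

The other executable would then call something like xmlrpc.client.ServerProxy("http://localhost:8000").touchdown() instead of touching the file at all.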