TDMS Bug or Expected Behavior?


I am accessing a TDMS file from two VIs running in parallel.  One VI is responsible for opening the TDMS file and writing a property.  The second VI is responsible for reading said property.

 

Here's where things get interesting: the second VI cannot find the property while the first VI has the TDMS file open, unless the first VI also writes data to the TDMS file.

 

 

I've created a few VIs (attached) to help illustrate the problem.  Any ideas as to what is going on?  I would have expected properties to be available as soon as they are written.

 

Read TDMS Property Names.vi

Write TDMS Property (with data).vi

Write TDMS Property.vi


Okay, here's the issue, or what I believe is the issue: TDMS may not write to disk immediately after a data write or after setting properties.  Because of this, opening a second reference to the file is really a race condition between the flush routine and the read.
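Here's a plain-file analogy of that race in Python (not the TDMS API, just ordinary buffered file I/O): a small write to a buffered handle can sit in memory until a flush, so a second handle opened on the same file sees nothing yet.

```python
import os
import tempfile

# Hypothetical stand-in for the TDMS scenario: "writer" is the first VI's
# reference, the bare open() calls play the second VI's reference.
path = os.path.join(tempfile.mkdtemp(), "demo.tdms")

writer = open(path, "w")          # block-buffered by default
writer.write("prop_name=value")   # small write stays in the buffer

before_flush = open(path).read()  # second reference: file looks empty
writer.flush()                    # force the buffer out to disk
after_flush = open(path).read()   # now the property-like data is visible
writer.close()

print(repr(before_flush))  # ''
print(repr(after_flush))   # 'prop_name=value'
```

Whether the second reader wins or loses the race depends entirely on when the flush happens, which is exactly the nondeterminism described above.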

 

The way to fix this is to perform the TDMS open in only one location, and then share that reference with the read and write operations.  Don't open a second reference to the file.


Sharing a single reference would solve this particular problem, but it would also add more messaging overhead.  The process that closes the reference would have to notify the other processes and wait for acknowledgements before it could safely close the reference.  Otherwise another process could attempt to access the TDMS file with a stale reference.

 

 

One of the reasons I chose the TDMS file format was its out-of-the-box support for concurrent access.  Perhaps property writes don't work the same way as data writes do (as you hypothesize, hooovahh)... ;)


Hi Tom,

 

Does it help if you call TDMS Flush.vi after you write the property?

 

 

> Sharing a single reference would solve this particular problem, but it would also add more messaging overhead.  The process that closes the reference would have to notify the other processes and wait for acknowledgements before it could safely close the reference.  Otherwise another process could attempt to access the TDMS file with a stale reference.

 

To avoid race conditions, you'd need some form of messaging/synchronization anyway: you need to ensure that your Read VI waits for your Write VI to finish writing the property before it tries to read it.
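In Python terms (a hypothetical sketch, since the real fix would use a LabVIEW notifier or occurrence), that synchronization looks like this: the reader blocks until the writer signals that the property exists, instead of racing it.

```python
import threading

properties = {}                   # stand-in for the TDMS property store
prop_written = threading.Event()  # the "property is on file" signal

def write_vi():
    properties["author"] = "Tom"  # stand-in for the TDMS property write
    prop_written.set()            # tell waiting readers it's safe now

def read_vi(results):
    prop_written.wait()           # block instead of racing the writer
    results.append(properties["author"])

results = []
reader = threading.Thread(target=read_vi, args=(results,))
writer = threading.Thread(target=write_vi)
reader.start()
writer.start()
reader.join()
writer.join()
print(results)  # ['Tom']
```

The reader always sees the property because it cannot proceed until the writer has finished, regardless of thread scheduling.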


Yeah, I assume a flush would fix this, but it would cause the index file to be larger than it needs to be.  I handle this by having the open in only one place; all reading and writing goes through that one actor, with messaging from other actors when they want to read or write data.  I understand the concurrent-references idea and it is an appealing feature, but race conditions like this can happen.  Sometimes that doesn't matter, but it sounds like in this case it does.
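That single-owner pattern can be sketched in Python (an analogy, not G; the names and message shapes here are made up for illustration): one actor owns the file reference, and every other process sends read/write requests as messages instead of opening its own reference.

```python
import queue
import threading

requests = queue.Queue()  # the actor's mailbox

def file_actor():
    storage = {}                      # stand-in for the open TDMS reference
    while True:
        msg = requests.get()
        if msg[0] == "write":
            _, key, value = msg
            storage[key] = value      # only the actor ever touches the file
        elif msg[0] == "read":
            _, key, reply = msg
            reply.put(storage.get(key))
        elif msg[0] == "close":
            break                     # only the actor ever closes it

actor = threading.Thread(target=file_actor)
actor.start()

requests.put(("write", "operator", "Tom"))
reply = queue.Queue()
requests.put(("read", "operator", reply))
value = reply.get()                   # "Tom": the queue serializes access
requests.put(("close",))
actor.join()
print(value)
```

Because the mailbox processes messages in order, a read enqueued after a write is guaranteed to see it; there is no second reference to race against.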


> Sharing a single reference would solve this particular problem, but it would also add more messaging overhead.  The process that closes the reference would have to notify the other processes and wait for acknowledgements before it could safely close the reference.  Otherwise another process could attempt to access the TDMS file with a stale reference.

> One of the reasons I chose the TDMS file format was its out-of-the-box support for concurrent access.  Perhaps property writes don't work the same way as data writes do (as you hypothesize, hooovahh)... ;)

 

You could solve this with custom Open and Close VIs that maintain a single TDMS reference and a running count of the number of processes that have called "Open".  Count up for Open and down for Close, closing the file when the count reaches zero.  One could also use a named single-element queue to hold the reference, as named queues have similar count-the-number-of-opens semantics.
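A minimal sketch of that count-the-opens idea, in Python rather than G (the class and field names are invented for illustration): the file really opens on the first Open and really closes on the last Close, so no caller is ever left holding a dead reference while others are still active.

```python
import os
import tempfile
import threading

class SharedRef:
    """One real file handle shared by many callers via a use count."""
    def __init__(self, path):
        self._path = path
        self._lock = threading.Lock()
        self._count = 0
        self.handle = None

    def open(self):
        with self._lock:
            if self._count == 0:              # first Open really opens
                self.handle = open(self._path, "a+")
            self._count += 1
            return self.handle

    def close(self):
        with self._lock:
            self._count -= 1
            if self._count == 0:              # last Close really closes
                self.handle.close()
                self.handle = None

path = os.path.join(tempfile.mkdtemp(), "demo.tdms")
ref = SharedRef(path)
h1 = ref.open()
h2 = ref.open()            # same underlying handle, count is now 2
ref.close()                # one user remains, so the file stays open
still_open = not h1.closed
ref.close()                # count hits zero: the file actually closes
now_closed = h1.closed
print(still_open, now_closed)  # True True
```

This is the same discipline a named single-element queue gives you in LabVIEW, just written out explicitly.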

