
How to write header in continuous acquisition


Recommended Posts

I suggest saving the whole cluster at once. The idea is attached below:


I have run into something like this a couple of times, and in my opinion this is the best solution.

First of all, you already have an object, not just raw data that needs to be formatted. Second, the file size will be relatively small. Third, you can edit, delete, or randomly read whichever cluster you want without many calculations, based on the object information. If you keep the file open for the whole read-write session (these functions are pretty stable), it doesn't eat too many resources; just flush the file occasionally. I am using GOOP, so this way works best for me.
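The approach above can be sketched in Python: fixed-size flattened records appended to a file that stays open for the session, flushed occasionally. The record layout and `acquire()` stand-in are assumptions for illustration, not the poster's actual cluster.

```python
import struct
import time

# Hypothetical record layout: timestamp (f64) + 4 channel readings (f64 each),
# big-endian like LabVIEW's default flattened data.
RECORD_FMT = ">d4d"
RECORD_SIZE = struct.calcsize(RECORD_FMT)  # 40 bytes per record

def acquire():
    # Placeholder for the real DAQ read (assumption).
    return (time.time(), 1.0, 2.0, 3.0, 4.0)

# Keep the file open for the whole read-write session.
with open("acq.bin", "wb") as f:
    for i in range(10):
        f.write(struct.pack(RECORD_FMT, *acquire()))
        if i % 5 == 0:
            f.flush()  # flush occasionally, as suggested
```

Because every record is the same size, random access to record *n* is just `f.seek(n * RECORD_SIZE)` followed by one `read` — no scanning or extra bookkeeping needed.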

I am sure there are other solutions too. BTW, if you first store your data in a FIFO, using a Queue or a functional global, the one-second interval will be more exact if the read-write process turns out to be resource hungry.
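The FIFO idea above is a classic producer/consumer split: the acquisition loop only enqueues, and a separate writer drains the queue to disk so slow writes never delay sampling. A minimal sketch (filename and payloads are assumptions):

```python
import queue
import threading

buf = queue.Queue()  # FIFO decouples acquisition timing from disk writes

def writer(path):
    # Consumer loop: drain the queue to disk until a sentinel arrives.
    with open(path, "wb") as f:
        while True:
            item = buf.get()
            if item is None:   # sentinel ends the session
                break
            f.write(item)
            f.flush()

t = threading.Thread(target=writer, args=("buffered.bin",))
t.start()

# Producer (acquisition) loop: just enqueue each sample and move on.
for i in range(3):
    buf.put(bytes([i]))

buf.put(None)  # signal end of session
t.join()
```

The producer never blocks on the file system, so its loop period stays steady even when the disk briefly stalls.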

Good Luck!

/ProximaBleu


Hey guys,

I am able to write a header into the binary file now, but I cannot read it back in order to write it into a text file. I am using the Read File VI. What byte stream type should I specify if the data file contains an ASCII header and binary data? I am attaching a snapshot of the VI I am using, but it gives an "unflattened or truncated data" error. I tried setting the pos offset to start reading after a few bytes, but it doesn't help. Any suggestions?

Megan

[Attached snapshot: post-761-1099090307.jpg]
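The "unflattened or truncated data" error typically means the reader is trying to interpret the ASCII header bytes as binary data. The fix is to consume exactly the header's length as text first, then unpack the remainder as numbers. A minimal Python sketch of that file layout (the header length, field names, and values are assumptions for illustration):

```python
import struct

HEADER_LEN = 64  # hypothetical fixed header size; must match what was written

# Write a sample file: fixed-length ASCII header followed by binary doubles.
header = "channels=2;rate=1000".ljust(HEADER_LEN).encode("ascii")
data = struct.pack(">4d", 1.0, 2.0, 3.0, 4.0)
with open("mixed.bin", "wb") as f:
    f.write(header)
    f.write(data)

# Read it back: consume the header bytes first, then unpack the rest.
with open("mixed.bin", "rb") as f:
    text = f.read(HEADER_LEN).decode("ascii").rstrip()
    values = struct.unpack(">4d", f.read())
```

The same idea applies in LabVIEW: read the header's byte count as a string first, so the file position sits at the start of the binary section before the typed read.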

