Everything posted by Herbert

  1. I would recommend iterating over a TDMS file using "List Contents" and a for loop. That obviously works multiple times in a row; see attachment 1. The while loop / EOF behavior was implemented to allow for a user experience that is somewhat consistent with the existing file I/O: we start at the most recently read group and iterate group by group. We do not consider offset and count in order to iterate within a group, we just go to the next group. If you want to iterate that way and you want to reset to the beginning of the file, you can simply set the group name back to the first group. There's another minor bug here in that wiring an empty string as a group name does not do the same as not wiring anything, hence the case structure in attachment 2. None of these approaches has a measurable performance advantage over the others. Hope that helps, Herbert
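Since LabVIEW diagrams can't be pasted as text, here is a minimal Python sketch that models the two iteration styles described above. The `TdmsFileModel` class and its methods are hypothetical stand-ins for the TDMS primitives, not a real API:

```python
# Hypothetical model of TDMS group iteration (NOT a real API):
# "read_next_group" starts at the most recently read group and advances
# one group per call; wiring the first group's name resets to the start.

class TdmsFileModel:
    def __init__(self, groups):
        self.groups = list(groups)   # ordered group names in the file
        self.position = 0            # index of the next group to read

    def list_contents(self):
        """Like 'List Contents': returns all group names for a for loop."""
        return list(self.groups)

    def read_next_group(self, group_name=""):
        """While-loop style: read the group named 'group_name', or the
        next unread group when no name is wired; None at end of file."""
        if group_name:
            self.position = self.groups.index(group_name)
        if self.position >= len(self.groups):
            return None              # EOF reached
        group = self.groups[self.position]
        self.position += 1
        return group

f = TdmsFileModel(["Group A", "Group B", "Group C"])

# For-loop style: enumerate everything, repeatable any number of times.
assert f.list_contents() == ["Group A", "Group B", "Group C"]

# While-loop style: group by group until EOF.
assert f.read_next_group() == "Group A"
assert f.read_next_group() == "Group B"
assert f.read_next_group() == "Group C"
assert f.read_next_group() is None

# Reset by wiring the first group's name back in.
assert f.read_next_group("Group A") == "Group A"
```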
  2. On further investigation, you might need to have ISAM support installed. MSDN recommends getting it from Borland (link), but there are 3rd party providers, too. That obviously does not solve the mystery of why MS Access can read dBase without having ISAM installed. If you use the dialog for configuring your "Microsoft Data Link", select Jet 4.0 and your DBF file, then test the connection, the error will say "[...] Unrecognized Database Format [...]". Now, go to the "All" tab of the dialog, pick "Extended Properties", click on "Edit Value..." and put in the specifier for the dBase version you are using. Then go back to the "Connection" tab and press "Test Connection", and it will lament the absence of an ISAM component. Why are you using dBase anyway? Supporting legacy software? Let us know if you get it up and running ... Herbert
  3. First of all, thanks a bunch to everybody for digging these issues up and giving all this feedback :thumbup: . I was out of office for a conference in Orange County :beer: so sorry for not answering right away ... but here we go.
     1) TDMS refnums are supposed to behave like any other file refnum. The fact that they return "true" almost all the time is a bug. For further reference, the issue is filed as CAR# 46KIDFWJ and can be expected to be fixed in the next major LabVIEW release. There is no great workaround; the best you can do is use a function like "Get Properties" (without group or channel name wired) or "List File Contents" and see if it errors out.
     2) The "Refnum to Path" issue is indeed a nice catch. Again, TDMS refnums are supposed to behave just like any other file I/O refnum. CAR# 46KII9WJ. Sorry for the inconvenience.
     3) Casting the TDMS refnum into an integer and checking for non-zero will not work just like that. The list of files can be sparse, so a non-zero number wouldn't necessarily mean that the refnum refers to an open file. On top of that, the two issues mentioned above might get fixed in a way that breaks this workaround, so I would rather not recommend using it.
     4) TDMS functions are primitives. TDMS functions and refnums are implemented on the same core level as the normal file I/O. No XNodes or anything like that.
     5) I can see how a delete function would come in handy. We had even more requests for a merge function; I think someone actually programmed it and put it up on the ni.com discussion forums (link).
     6) The rules for opening a TDMS file have been subject to some confusion. It has been a key requirement that TDMS files can be transported as one file (as opposed to TDM, which always needs 2 files), hence TDMS is implemented so the index file is optional. However, we do prefer to have the index file, because it allows for quick random access into the TDMS file without loading much information. So when we find a TDMS file without an index, we generate the index from the TDMS file. Now, if you have the DIAdem DataFinder running on the folder that your file is in, the DataFinder will be the first to open that file and hence generate the index file. Files that are completely empty will be deleted on close. This requirement stems from people complaining about empty files on their disc after doing a simple open and close. Valid TDMS files created by LabVIEW, CVI or DIAdem should never be empty though. Again, if the DataFinder gets a hold of your file, that's how it will magically disappear. Warning: If you have a TDMS and TDMS_INDEX file [same names, same folder] that do not match, you will likely get some kind of error and will not be able to read data from that file. The workaround is deleting the TDMS_INDEX file. We're working on preventing this kind of problem.
     7) The panel close thing on the TDMS Viewer is a known problem that has been addressed. I don't have the CAR# handy. Workaround: Use the "Quit" button :headbang:
     I hope I addressed all the questions you guys had in this thread, please don't hesitate to throw more at me. Thanks again for putting this together. Herbert
  4. I'm starting to feel bad about bringing this up all the time, but did you try DIAdem and the DIAdem Report Express VI? Except for the adjustable print preview you describe, it seems to come with everything you're looking for. BTW, the Express VI is not password protected. It is also modular enough that reusing its subVIs (e.g. for preview or printing) is a reasonable thing to do. Hope that helps, Herbert
  5. You might need to play with the connection string. This website has a comprehensive collection of sample connection strings: http://www.carlprothman.net/Default.aspx?tabid=81. The connection string for Jet 4.0 has an optional part called "Extended Properties" that can be used to specify file types other than mdb, plus the parameters needed to open these files. I think it is sufficient to just include "Extended Properties=DBASE III" or "Extended Properties=DBASE IV" with your connection string. With any luck, this might work for you as-is. Hope that helps, Herbert
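As a sketch, a complete Jet 4.0 connection string for dBase files might be assembled like this. This is plain string handling; the folder path is a placeholder, and note that Jet's dBase driver conventionally takes the folder (not the .dbf file) as the Data Source, with the table addressed in the SQL query:

```python
# Build an OLE DB (Jet 4.0) connection string for dBase files.
# The folder path and the default "DBASE IV" version are examples;
# adapt both to your own data.
def jet_dbase_connection_string(folder, dbase_version="DBASE IV"):
    parts = [
        "Provider=Microsoft.Jet.OLEDB.4.0",
        f"Data Source={folder}",
        f"Extended Properties={dbase_version}",
    ]
    return ";".join(parts) + ";"

conn = jet_dbase_connection_string(r"C:\data\dbf")
assert conn == ("Provider=Microsoft.Jet.OLEDB.4.0;"
                r"Data Source=C:\data\dbf;"
                "Extended Properties=DBASE IV;")
```

You would then pass this string to whatever database connectivity layer you use (e.g. the LabVIEW Database Connectivity Toolkit's "Open Connection" VI).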
  6. Jeff, the 512 thing is something I cannot really confirm, since there are several "sweet spots" for chunk sizes floating around (the one I hear most often is 65535). Since we still add several bytes of index information every time we write, you would also need to use a different value to hit exactly 512. In general, I would rather use large data chunks to lower the number of times you access the disc. I would clearly recommend opening the file, then writing continuously, and finally closing it. The performance of opening the file decreases slightly with the file size, because we're setting up index information for random access when we open a file. Flushing is something you only need if you have to make absolutely sure that all data you have acquired is on disc even in the case of a complete system failure. Otherwise, just using TDMS Write will be perfectly sufficient. Hope that helps, Herbert
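The same trade-off (open once, write few large chunks, flush only when data loss on a crash is unacceptable) can be illustrated with plain file I/O in Python. The chunk size here is just an example, not a TDMS recommendation:

```python
import os
import tempfile

CHUNK = 65536                                # example: one large write per chunk

samples = bytes(range(256)) * 1024           # 256 KiB of fake acquired data
path = os.path.join(tempfile.mkdtemp(), "log.bin")

# Open once, write many large chunks, close once at the end.
with open(path, "wb") as f:
    for start in range(0, len(samples), CHUNK):
        f.write(samples[start:start + CHUNK])
        # Only force data to disc when losing buffered data on a system
        # failure is unacceptable; every flush/fsync costs a disc access.
        # f.flush(); os.fsync(f.fileno())

assert os.path.getsize(path) == len(samples)
```

Opening and closing the file inside the loop, or flushing after every small write, would multiply the number of disc accesses for no benefit in the normal case.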
  7. I was referring to the context help (CTRL-H) of course. I always get these confused. Herbert
  8. The problem is not with a particular function, but with all functions that create subarrays, e.g. transpose, index, or subset. Subarrays are used in LabVIEW to save memory by not making unnecessary copies of an array. Transpose will not actually transpose the array, but it will attach a piece of information that flags this array as transposed. Index and subset will not create new arrays (or values) that they copy data into, they will just add a piece of information to the array that describes which part of the array is to be used by subsequent functions. Indexing tunnels will do the same thing. That way, it's up to the functions down the wire to decide whether or not to make a data copy. Arrays that carry this kind of additional information are referred to as "subarrays". The online help will show whether you have an array or a subarray when you hover over an array wire. Hope that helps, Herbert
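The same view-instead-of-copy idea exists in other languages; Python's `memoryview` is a rough analogy. This illustrates the general concept of a subarray as "buffer + extra bookkeeping", not LabVIEW's actual internals:

```python
# A memoryview slice, like a LabVIEW subarray, describes a region of an
# existing buffer (offset + length) instead of copying the data.
data = bytearray(b"abcdefgh")
view = memoryview(data)[2:5]        # no copy: just bookkeeping over 'data'

data[2] = ord("X")                  # change the underlying buffer...
assert bytes(view) == b"Xde"        # ...and the view sees the change

copy = bytes(view)                  # an explicit copy decouples them
data[3] = ord("Y")
assert copy == b"Xde"               # the copy is unaffected
assert bytes(view) == b"XYe"        # the view still tracks the buffer
```

In LabVIEW the decision to materialize such a view into a real copy is left to the downstream functions, which is exactly why most code never notices the difference.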
  9. That's a bug in TDMS. TDMS Write will handle subarrays either correctly or incorrectly, depending on which array dimension the "sub" applies to. Apparently, the "transpose array" you are using on the output of the "item names" property puts out the kind of subarray TDMS Write struggles with. There are 2 related issues you might want to be aware of: several TDMS functions have problems working with subarrays wired to "channel names", and TDMS Set Properties doesn't appreciate subarrays for either "property names" or "property values". All of these issues have since been fixed internally; I ran your VI with a current LabVIEW dev build, and all three cases work just fine. For now, the workaround for all of this is to turn the subarrays back into regular arrays, which is what the additional for loop is doing in the third case. If you place that for loop right behind "transpose array", the whole VI is going to work. Sorry for the inconvenience. Herbert
  10. My favorite 8.0 feature that nobody talks about is the gravity well that aligns labels with terminals on the block diagram. I use that hundreds of times a week and it makes the life of my cursor keys a lot easier (and longer). Yet another 8.0 feature that nobody talks about is the DIAdem Report Generation Express VI - which is partly because not everybody has DIAdem installed. So I have made talking about it really easy by videotaping it for you :thumbup: Have fun, Herbert Download File: post-7544-1169255337.wmv
  11. You might want to have a look at this example VI: ./examples/file/plat-tdms/TDMS - Write data (events).vi. It logs an array of values, a timestamp (fully qualified) and a comment string within a loop. An example that reads the file is included, so is a more generic TDMS File Viewer. If you are using an older version of LabVIEW, try ./examples/file/storage/Write Event Data.vi. Does the same thing, but with a TDM file as opposed to a TDMS file. Reader / viewer are included here as well. Hope that helps, Herbert
  12. LVM will definitely work for this use case. However, I would suggest using the TDM Streaming API and TDMS files (in older LabVIEW versions, TDM and the Storage VIs). Both give you a better way of formatting your data in LabVIEW so it comes up in DIAdem exactly like you want it. That includes data types, names for channels and groups, and arbitrary descriptive properties. Plus, of course, the TDM Streaming API will rid you of having to deal with Express VIs. You might want to look at ./examples/file/plat-tdms.llb/TDMS - Write data (events).vi for some inspiration. Hope that helps, Herbert
  13. The DIAdem files LabVIEW needs for the Report Generation Express VI need to be in different places depending on whether you are in a LabVIEW development system or in a built app. LabVIEW does not automatically include any of these files with source distributions, built executables, installers and such. You need to manually add these files to your build specification. Both files are expected to be in the same path as the executable you're creating:
     .\vi.lib\Platform\express\PicExport.vbs
     .\vi.lib\Platform\express\UpdateGraph.vbs
     For the layout path, I recommend that you determine the path for the layout based on the LabVIEW path constants (e.g. "Current VI's Path") and wire it to the Express VI. You can do the same thing with the target path for the PDF. That way, you don't have to rely on the same path existing on the target machine. BTW, if you were using built-in layouts, these layouts would be expected in a sub-path of your application directory called .\templates. They would have to be added to the build spec manually from .\vi.lib\Platform\express\templates. I know this is a fairly nasty problem. Fixing it on our side will be pretty high effort though, so we might have to go with the workaround for a while. Sorry for the inconvenience. Hope that helps, Herbert
  14. Hallo LAVA! Dear LAVAs, I just joined your site and I'm excited to see how active a community LAVA is. I used to work for GfS in Germany (DIAdem R&D), then I spent some time at NI Services (now measX), designing ORACLE databases for some big automotive players. Then I joined the force ... I've been with the LabVIEW Ease of Use team for about 4 years now, working on many things including the Storage VIs, LVOOP and the DIAdem Report Express VI. My most recent contribution to the LabVIEW Supremacy is the TDMS file format and API. See you guys at the Lava Lounge. Herbert
  15. You might want to have a look at http://forums.lavag.org/index.php?s=&s...ost&p=23027. Herbert
  16. Short version: TDMS was created so people wouldn't have to decide between so many different file formats any more. It's not all done yet, but as far as saving measured data (= anything that can be looked at as waveforms / 1D arrays) goes, TDMS beats all other formats in LabVIEW in writing and reading performance (see the fine print below). => If it is reasonable for you to break down your data into properties and data types that TDMS can handle (1D/2D arrays of numerics, strings, timestamps, plus all kinds of waveforms), we clearly recommend TDMS. If your data types are too complex for that, Datalog / Bytestream is your best bet.
     Long version: Prior to making TDMS, we put together a set of reference use cases (ranging from 1-channel-super-fast to 10000-channels-single-point) and ran benchmarks on these use cases with all the different file formats we had. The result was that most formats were good at something, but every format had significant disadvantages. Some examples:
     - HDF5 is great for saving a few channels very fast. If you have 100 or more channels though, or if you keep adding objects to the file (e.g. when you're storing FFT results), performance decreases exponentially with the number of objects.
     - Both Datalog and HDF5 maintain trees for random access, which creates hiccups in performance that usually exceed 0.5 seconds apiece. For streaming applications, 0.5 seconds is a very long time.
     - TDM was developed for DIAdem, where every file is loaded and saved in one piece. It stores every channel as a contiguous chunk of data. If you want to add a value to the end of a channel, you need to move all subsequent channels in order to make room for that value. We have done a few things to diminish this issue, but the bottom line is that TDM is not suitable for streaming at all.
     - TDM stores the descriptive data in an XML file. That creates several issues: you always have 2 files you need to copy, delete, email or whatever; XML files are read, parsed and written in one piece, so the performance of adding a new object decreases with the size of the XML file; XML is slow (think 10000 channels); and the TDM component uses XPath to query for objects, which rules out using pretty much any special character (including blanks).
     TDMS was built to eliminate all the issues listed above. Even though the "S" stands for "Streaming", TDMS in LabVIEW 8.20 beats all other file formats we have in LabVIEW in writing and reading performance. There are some areas we're still working on though, as you can see in the following fine print.
     Fine print: With very low channel numbers and high throughput, HDF5 still writes about 10% faster than TDMS. Unstructured LabVIEW binary files in some cases beat TDMS by a few percentage points (but try reading them...). If you store single values, we recommend that you gather e.g. 1000 values in an array and store that array; otherwise, reading performance will be very bad. Note that a LabVIEW version that is coming up really soon will be able to do that automatically. If you figure out more of these, please don't hesitate to let me know. Hope this helps, Herbert
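The "gather e.g. 1000 values in an array and store that array" advice from the fine print can be sketched generically like this. The block size of 1000 comes from the post; `write_block` is a hypothetical stand-in for a single TDMS array write:

```python
# Accumulate single-point acquisitions into blocks before writing,
# instead of performing one tiny write per value.
written_blocks = []

def write_block(values):
    """Hypothetical stand-in for one TDMS array write."""
    written_blocks.append(list(values))

buffer, BLOCK = [], 1000
for i in range(2500):               # 2500 single-point "acquisitions"
    buffer.append(i)
    if len(buffer) == BLOCK:
        write_block(buffer)         # one large write per 1000 values
        buffer.clear()
if buffer:                          # flush the partial final block
    write_block(buffer)

# 2500 values were stored in just 3 writes instead of 2500.
assert [len(b) for b in written_blocks] == [1000, 1000, 500]
```

The same pattern applies on the read side: reading back 1000-value arrays is far cheaper than issuing one read per stored value.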