Herbert

NI

  1. I agree that classes in general should be tested through their public interfaces. On the other hand, I want to design my tests so that they lead me to the root cause of a problem in the shortest possible time. If a "black box" test using my public interface fails, I don't want to have to dig down through my VI hierarchy in order to find the root cause - not if I know that a "white box" test inside the class could have provided me with that information without me doing anything. So I guess I want to test both the public interface and the private methods.

     Obviously, black box testing is the only way of making sure that you're testing the exact behavior your class will expose to its callers. A white box test can interfere with the inner workings of a class, bearing the risk that it alters the class's behavior or otherwise produces results that couldn't occur in a black box test. So, if a black box test fails, I'll probably have to fix my code. If a white box test fails, I might have to fix the test instead. Sometimes it's worthwhile adding and maintaining a white box test, sometimes it's not ...

     I strongly encourage everyone who is interested in unit testing to watch for new releases on ni.com/softwareengineering and related content on ni.com/largeapps on Friday, 02/06/2009.
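The trade-off reads the same in any language; here is a minimal sketch in Python (the original discussion is about LabVIEW classes, and the `Accumulator` class and all names below are invented purely for illustration):

```python
# Hypothetical class under test; every name here is made up to
# illustrate black-box vs. white-box testing.
class Accumulator:
    def __init__(self):
        self._total = 0          # private state

    def add(self, value):        # public interface
        self._total += value

    def total(self):             # public interface
        return self._total

def black_box_test():
    """Uses only the public interface -- tests exactly what callers see."""
    acc = Accumulator()
    acc.add(2)
    acc.add(3)
    assert acc.total() == 5

def white_box_test():
    """Reaches into private state, pointing straight at a root cause --
    but it must be updated whenever the internals change, even if the
    public behavior stays correct."""
    acc = Accumulator()
    acc.add(2)
    assert acc._total == 2

black_box_test()
white_box_test()
```

If the class's internal representation changes (say, `_total` becomes a list of increments), the white-box test breaks even though the public behavior is still right - which is exactly the "fix the test instead of the code" situation described above.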
  2. I might look at this through my TDMS glasses too much, but to me, the natural way of storing the events you have mentioned would have been to create a channel for each cluster element - where the channel is of the same data type as your cluster element. I realize that this requires you to unbundle and bundle the cluster for writing and reading, respectively, but you wouldn't lose any numeric accuracy, any timestamp tidbits or other things. The only advantage I can see in storing everything as strings would be less coding. Am I missing something there?

     I have thought a lot about allowing arbitrary clusters in TDMS. The problem, as you mentioned, is that you don't know what kind of data you're really dealing with, so it's impossible to magically do the right thing. Some cluster elements are better off being stored as properties, but how would I know? If I store them as properties because they are scalar, I'm out of luck if they change their value after 1000 iterations. Similarly, what would I do with a numeric array in the cluster? Create a channel? Append the array values from the next cluster to that channel? What if these are FFT results? I have not been able to come up with a good way of identifying these things automatically.

     Of course, you can always come up with some fancy piece of UI that allows users to assign cluster elements to TDMS objects (smells like Express VI ), but the best interface we have for making that assignment is the block diagram. If a cluster doesn't contain arrays or other clusters, you could make a case that we should handle it by making each cluster element a channel. That would be a viable thing to do. But when it comes to nested clusters and clusters that include arrays, providing "automatic" handling creates expectations that can hardly be fulfilled. Herbert
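The "one channel per cluster element" idea can be sketched in a few lines of Python (the event record and all field names below are invented; a real implementation would hand each resulting list to a TDMS write call for the matching channel):

```python
from datetime import datetime

# Invented event records standing in for a LabVIEW cluster: one
# timestamp field, one numeric field, one string field.
events = [
    {"timestamp": datetime(2007, 6, 18, 12, 0, 0), "level": 3, "message": "started"},
    {"timestamp": datetime(2007, 6, 18, 12, 0, 5), "level": 1, "message": "done"},
]

# "Unbundle": collect each field into its own channel, preserving its type.
channels = {field: [event[field] for event in events] for field in events[0]}

# channels["level"] stays numeric and channels["timestamp"] stays a real
# timestamp -- nothing gets flattened to strings, so no accuracy is lost.
```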
  3. QUOTE(Kevin P @ Jun 19 2007, 10:50 AM) Kevin, the second option will not work. The executable needs to be compiled with the same LabVIEW version as the runtime engine you are using. The incompatibility is not in the TDMS files; it exists between LabVIEW-compiled code and the LabVIEW Runtime Engine. So, I'm afraid the first option will be the only way to go (except for updating everything to 8.2.1, regardless of the DAQmx version). Herbert
  4. Herbert

    Scary...

    I had a chance to see motorcycle traffic in Vietnam recently. People participating in or joining traffic never look to the left, right or back, but that's OK, since everybody is aware of it. If you want to pass someone, you generally honk, so they know something is coming from behind. The whole thing might look quite familiar to you if you have one of those screen savers that simulate a fish swarm. There are more details, but it is scary enough just like that. Herbert http://forums.lavag.org/index.php?act=attach&type=post&id=6142
  5. QUOTE(Thang Nguyen @ Jun 18 2007, 04:49 PM) As long as your data values are equally sampled, storing them as waveforms is a lot more efficient. A timestamp is 128 bits, a double value is 64 bits, so by not storing a timestamp with every data value, you save 2/3 of your disk footprint and gain performance accordingly. Of course, if your data is not equally sampled, this is not useful for you; in that case, you need to store the timestamps and data values to different channels. If it is easier for you to read out, you can always store time and data to different channels, but it might become a performance bottleneck in your application.

     QUOTE(Thang Nguyen @ Jun 18 2007, 04:49 PM) I have never been to Caffee Suoi Da; actually, there is a lot of coffee in Vietnam (a lot ...). Plus, there is always the B52 :thumbup: . (If only I could figure out how to embed videos in my posts...)

     QUOTE(Thang Nguyen @ Jun 18 2007, 04:49 PM) I lived in Saigon. Do you feel hot there? Not really. I live in Austin, Texas. That's about as hot as Saigon. I wouldn't dare to drive a motorcycle in Saigon, though ... Herbert
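The footprint arithmetic above can be double-checked in a few lines of Python, using the sizes quoted in the post (a LabVIEW timestamp is 128 bits, a double is 64 bits); the helper function is illustrative only:

```python
# Per-value sizes as quoted in the post.
TIMESTAMP_BITS = 128
DOUBLE_BITS = 64

def footprint_bits(n_samples, store_timestamps):
    """Bits on disk for n_samples double values, optionally with a
    timestamp stored next to every value."""
    per_sample = DOUBLE_BITS + (TIMESTAMP_BITS if store_timestamps else 0)
    return n_samples * per_sample

n = 1_000_000
with_timestamps = footprint_bits(n, True)      # 192 bits per sample
waveform_only = footprint_bits(n, False)       #  64 bits per sample
savings = 1 - waveform_only / with_timestamps  # == 2/3, as stated above
```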
  6. Thang,

     A) The idea here is that users should never have to touch properties like wf_increment or even know about them. We use the wf_xxx properties to store things that are embedded in LabVIEW data types (e.g. T0 and dT are embedded in the waveform data type). If you use waveforms correctly, all of these properties should be written and read without you doing anything special. That of course only works if the waveforms have the correct values in them. Since you are asking - here are the important ones: T0 is saved to wf_start_time (timestamp). dT is saved to wf_increment (double). If your data is not time-domain, wf_start_time will still be set, but your X0 value goes into wf_start_offset (double). This will happen for example with frequency-domain data or histogram results. If you exchange data with DIAdem, you need to set the wf_samples property to something other than 0 (we usually set it to the number of values in the incoming waveform, so in your file, it is 1). DIAdem will use this property to determine whether a channel is a waveform or not.

     B) That's exactly right. The only thing you need to do is set the property NI_MinimumBufferSize (integer) for each of your data channels to 1000 or 10000 or so. The TDMS API does the buffering automatically (requires LV 8.2.1). This is not crucial to the functionality of your application, but it will speed up writing and reading quite a bit.

     Unrelated) I see from the flags on your account that you're from Vietnam. I just came back from 2 weeks of vacation, visiting friends in Vietnam. They took me on a round trip through the country, including Hanoi, Ha Long, Nha Trang and Saigon. Best vacation I've had in a long time. I'm addicted to Caffee Suoi Da now :thumbup: Herbert
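The property mapping in (A) can be summarized as a small Python sketch. The helper function is not part of any NI API - only the property names, their types, and the wf_samples convention come from the post:

```python
from datetime import datetime

def waveform_to_tdms_properties(t0, dt, n_values, x0=None):
    """Illustrative mapping of waveform components to TDMS channel
    properties, as described in the post."""
    props = {
        "wf_start_time": t0,    # waveform T0 (timestamp)
        "wf_increment": dt,     # waveform dT (double)
        # DIAdem treats the channel as a waveform only if wf_samples != 0;
        # the post suggests the number of values in the incoming waveform.
        "wf_samples": n_values,
    }
    if x0 is not None:
        # Non-time-domain data (FFT results, histograms): X0 goes into
        # wf_start_offset (double); wf_start_time is still set.
        props["wf_start_offset"] = x0
    return props

# A time-domain waveform: T0, dT = 1 ms, 1000 samples.
props = waveform_to_tdms_properties(datetime(2007, 6, 18), 0.001, 1000)
```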
  7. Herbert

    Digital Waveform

    Can't you just take the one waveform you acquire and split it up, e.g. using "Get Waveform Components" combined with "Get Digital Components", or using some of the functions on the "Digital Waveform" -> "Conversion" palette? Herbert
  8. QUOTE(Thang Nguyen @ Jun 18 2007, 12:23 PM) Looking at the file with the TDMS Viewer, what you have is: different channel lengths (high-frequency channels have 618 values, low-frequency channels have 224); the same dT (1.00 for all channels); and varying starting times (T0) for every channel.

     It looks like you are using waveforms to save single values. In that case, I'm not sure that DAQmx or other functions that put out waveforms will set dT correctly, because there is no second value to reference. If you save a series of single values to a waveform channel, you need to be really sure that they are equally sampled. If you're not sure, you should rather split up the waveform data type and store the timestamps and the data values to different channels (e.g. one timestamp channel and one double channel).

     Saving single values to TDMS like this is also not a very efficient thing to do. It is a lot more efficient to gather a bunch of values and write them as a larger array. You can have the TDMS API do that for you by setting the channel property "NI_MinimumBufferSize" to the number of values that you wish to buffer. In your case, good values might be 1000 or 10000. Hope that helps, Herbert
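What NI_MinimumBufferSize asks the TDMS API to do can be mimicked in a few lines of Python. The `BufferedChannelWriter` class below is purely illustrative - in LabVIEW the buffering happens inside the TDMS API once the property is set:

```python
class BufferedChannelWriter:
    """Accumulates single values and flushes them as one larger write,
    mimicking the effect of the NI_MinimumBufferSize channel property."""

    def __init__(self, flush_fn, min_buffer_size=1000):
        self.flush_fn = flush_fn              # stand-in for the disk write
        self.min_buffer_size = min_buffer_size
        self._buffer = []

    def write(self, value):
        self._buffer.append(value)
        if len(self._buffer) >= self.min_buffer_size:
            self.flush()

    def flush(self):
        if self._buffer:
            self.flush_fn(self._buffer)       # one chunked write
            self._buffer = []

chunks = []                                   # records each "disk write"
writer = BufferedChannelWriter(chunks.append, min_buffer_size=1000)
for v in range(2500):
    writer.write(float(v))                    # 2500 single-value writes
writer.flush()                                # flush the 500-value tail
# chunks now holds 3 writes (1000 + 1000 + 500) instead of 2500.
```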
  9. Try the VIs in vi.lib\utility\libraryn.llb. Herbert
  10. QUOTE(Darren @ May 17 2007, 05:16 PM) Consider my vote changed County Line. Map. Menu. :thumbup: Unless of course the Salt Lick tradition is really important. I joined in January. I wouldn't know. Herbert
  11. My favourite Austin BBQ joint is "The County Line on the Lake". Should be large enough for really big parties, but they can't score on tradition and BYOB. I also think they don't have all-you-can-eat. So, I'm in on the Salt Lick. Herbert http://forums.lavag.org/index.php?act=attach&type=post&id=5898
  12. QUOTE(torekp @ May 17 2007, 08:43 AM) I'm almost sure you've seen it, but just in case ... I posted some more details on how we benchmark file formats at NI in this thread (http://forums.lavag.org/index.php?s=&showtopic=7939&view=findpost&p=30185), including prerequisites and the actual VIs we use to run our benchmarks. For relatively short periods of writing, the profiler returns only the time it takes to shove your data into the Windows buffer, but that doesn't mean it's on disc yet. Don't yell at it - the poor thing doesn't know any better. Herbert
  13. QUOTE(Tomi Maila @ May 16 2007, 12:39 PM) No problem. Let me know how version 1.8 holds up... Herbert
  14. QUOTE(Tomi Maila @ May 16 2007, 11:28 AM) Tomi, I used HDF5 version 1.6.4. The LabVIEW API for that was never released to the public. I also don't have that code in my benchmark tool any more. You might need to rip some stuff out of the code, e.g. DAQmx or HWS, depending on what you have on your machine. Adding a format is rather simple: just add it to the typedef for the pulldown list and add new cases to the open, write and close case structures.

     Some remarks:
     - Simulated data comes with the usual LabVIEW Express waveform attributes. DAQmx data comes with properties taken from the DAQmx task definition.
     - One / multiple headers means that objects will be created and properties written on the first write only / on every write.
     - File formats like Datalog (2D array of DBL) will obviously not contain any descriptive data.
     - The red graph/scale shows what performance would be if LabVIEW did nothing but write to disk; the white indicators show the actual values.
     - Groups=-1 means the app will just keep going.
     - The results are written out as a TDM file. Subsequent runs will add to this file.

     Hope that helps, Herbert http://forums.lavag.org/index.php?act=attach&type=post&id=5888
  15. QUOTE(Gary Rubin @ May 16 2007, 10:53 AM) Yes. Herbert