
Kevin P


Posts posted by Kevin P

  1. QUOTE(Gabi1 @ Sep 19 2007, 11:23 AM)

    Well, there's a tradeoff. I wasn't meaning you should poll as fast as possible with the timeout to 0. I was thinking of a timeout of anywhere from 5-50 msec, depending on the reaction time you need. A bit of a compromise but easy to implement.

    QUOTE(Gabi1 @ Sep 19 2007, 11:23 AM)

    2: do you mean an occurrence that could be fed by two occurrence generators? I am not following
    :unsure:

    Attached is a picture of what I *meant* to mean. As I threw it together, I realized that all the Wait and While Loop termination issues can get non-trivial pretty quickly. If you use a -1 timeout to "wait forever", you need to be sure to have a foolproof method to shut down those Wait primitives. This is easier with Notifiers which can be forcibly destroyed elsewhere in your app's shutdown code, forcing any pending Waits to return with an error. The external occurrence should probably be set here in this vi after completing the code you need to run.

    You may also need to carefully consider the "ignore previous?" inputs, depending on the nature of your app. You can get stuck with some subtle race conditions when those are set "True." My typical solution to that is to wrap the Wait in a while loop and do special things on iteration 0 to deal with any old, "stale" occurrences / notifications. Usually this means a timeout of 0, ignore previous=False, and do nothing else until iteration 1.
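
    (Here's the same "flush anything stale, then do the real wait" idea sketched in Python, just to illustrate the structure -- a threading.Event isn't a LabVIEW occurrence, it latches until cleared, but the iteration-0 trick maps over pretty directly. All names below are made up.)

    ```python
    import threading

    def wait_fresh(event, timeout=None):
        """Discard any 'stale' trigger from before this call, then wait for a new one."""
        if event.is_set():          # "iteration 0": consume the old trigger, do nothing else
            event.clear()
        return event.wait(timeout)  # "iteration 1": the real wait

    evt = threading.Event()
    evt.set()                                 # stale trigger left over from earlier
    threading.Timer(0.2, evt.set).start()     # the genuinely new trigger arrives later
    print(wait_fresh(evt, timeout=1.0))       # True -- reacted to the new set(), not the stale one
    ```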

    Anyhow, given the alternative here, polling starts looking not so bad, eh?

    -Kevin P.

  2. I'd think there are 2 basic options:

    1. Use a polling loop such that your "Wait" calls have a very short timeout. Then check the boolean "timeout" outputs to determine whether you've received an occurrence or notification. You'll burn a little extra CPU doing the polling, and your reaction time will be throttled a bit when one thing fires but the other has to wait for its timeout before you react.

    2. Have the reaction code use its own private internal occurrence. Your main code waits for that private occurrence to be set. One bit of parallel code will wait for the external notification, then fire the private occurrence. Another bit of parallel code does the same with the external occurrence. Whichever external event happens first will allow you to react to the private occurrence.
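
    Very roughly, option 2 looks like this if I sketch it in Python instead of G (threading.Event standing in for the occurrences and notification; everything here is invented for illustration):

    ```python
    import threading

    external_notification = threading.Event()   # stand-in for the external notification
    external_occurrence = threading.Event()     # stand-in for the external occurrence
    private = threading.Event()                 # the private internal "occurrence"

    def relay(external):
        external.wait()                         # one parallel waiter per external event...
        private.set()                           # ...fires the private occurrence

    for ev in (external_notification, external_occurrence):
        threading.Thread(target=relay, args=(ev,), daemon=True).start()

    threading.Timer(0.2, external_occurrence.set).start()   # simulate one event firing
    private.wait()                              # main code reacts to whichever came first
    print("reacting")
    ```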

    -Kevin P.

  3. Veering back to the original topic of in-placeness during type conversion: I tried (non-exhaustively) to probe this a bit for arrays of U32's being converted to I32's. I either auto-indexed a U32 array on a loop output or I started with a front panel array control. I compared the buffer allocations against cases that started with I32's and did no conversions. I saw no differences, making me think once again that U32-->I32 conversion occurs in-place.

    And I'm still inclined to think that no memory copying is done, though I didn't retest recently. I think all the bits stay put, but the conversion to I32 will cause them to be interpreted differently.
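
    For illustration only (NumPy standing in for LabVIEW here), this is the kind of reinterpret-without-copy behavior I mean:

    ```python
    import numpy as np

    u = np.array([0, 1, 0xFFFFFFFF], dtype=np.uint32)
    i = u.view(np.int32)              # same bytes, new interpretation -- no copy made
    print(i)                          # [ 0  1 -1]
    print(np.shares_memory(u, i))     # True: the bits stayed put
    ```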

    Anyone have more definitive info?

    -Kevin P.

  4. Good question. I'm particularly interested in the I32<-->U32 kind of case.

    I've been carrying around a factoid in my head for several years that such a conversion involves merely a reinterpretation of the bits and does not have to make a data copy. Probably back somewhere around LV RT 6.1, I did some execution time testing for converting a large array of U32's into I32's. At the time, I thought my observation was that the execution was too fast to include any memory allocation (didn't have the "Show Buffer Allocations" tool available). So I took that to mean that conversion between signed and unsigned ints did nothing to change or copy the bits in memory, it just reinterpreted them.

    I've been relying on that factoid for a long time now, and will now have to see if I've been fooling myself.

    P.S. Were there any weird behaviors of the "Show Buffer Allocation" tool, especially when it first came out? Somewhere along the line, I developed a degree of skepticism about that tool though I don't recall exactly why. It seems like everyone else finds it to be extremely reliable.

    Same on the skepticism for the Profiler tool, only much more so. I have only rarely found it to be even a little bit helpful at tracking down memory or CPU hogs. I had an app running 24/7 for weeks. Windows' task manager would show me losing ~2MB available memory per day. But the Profiler couldn't find any vi with growing memory consumption. The max usage of any vi kept showing up at about 350 K, even after Task Manager had dropped by 100 MB.

    -Kevin P.

  5. Thanks for joining the conversation, Herbert. I'll see if I can present my case more clearly.

    There are 3 reasons I brought up the idea of converting clusters to strings for the sake of writing to TDMS.

    1. There's no native support for writing clusters.

    2. Such cluster unwrapping to and from strings has been (largely) figured out and is available as open source via OpenG and MGI. My understanding is that PJM's earlier post I referenced was only a very slight tweak to write clusters as TDMS properties rather than writing them to a text INI file. I further slightly tweaked it to handle DAQmx global channel names as human-readable strings. The point being, most of the tricky work is already done to support these conversions to and from strings.

    3. One further reason I left out of the first post -- it may be an added benefit to be able to read the file from Matlab. We have a site license for Matlab but very few people use LabVIEW and no one I know has Diadem. Personal installs (such as a LabVIEW executable for handling the data, LabVIEW run-time, or even the Excel TDMS plugin) aren't allowed on our networked PC's. So, if people really want to dig through portions of the huge data files that TDMS makes it so convenient to create, I pretty much have to give them a Matlab script. I'm not real nimble in Matlab and the Matlab interface example from ni.com keeps erroring out in a weird way (in trace execution, it appears to execute each of the datatype cases in the 'switch' structure in order as though falling through like C, then terminating with an error). Sooner or later, given enough time, I expect I can work out my problems with the Matlab script. But I can anticipate that new kinds of trouble could arise if I try to use Matlab to retrieve unusual datatypes from TDMS. Binary timestamps are one such type I'm not sure of and any kind of cluster could be a whole new type of difficulty.

    Now then, the alternative -- native support for clusters. You mentioned the difficulty of automatically treating cluster elements the "right" way, where some may be treated as properties and others as channels. That's not the feature I'd want. I have some full clusters that I want to write as TDMS properties and other full clusters that I want to write as TDMS channels. As the app programmer, I'll know when to call TDMS Set Properties and when to call TDMS Write. I just want to be able to wire my typedef'ed clusters directly into those TDMS primitives. Some clusters may have dozens of elements and I don't particularly enjoy having to maintain all the unique group, channel, and property strings that tend to change with every app, not to mention the extra hassle of all the unbundling and re-bundling.

    As I view it, I'm defining the most important association by bundling the cluster elements together in the first place. If the clusters contain an array of data, that array of data associates to the other elements in the cluster. That's why they were bundled -- they provide one another's context. I don't need one cluster's array of data to be combined with another cluster's array of data to make them contiguous. Just treat the entire cluster as one specific value of a special datatype, just like a DBL or U32 would be treated.

    Nevertheless, I can see that native support for clusters as binary data creates some difficulties for other apps like Diadem that need to read the TDMS file. So I kinda figured it might be more realistic to take an approach similar to PJM's example but where, in the end, each cluster element becomes a separate channel. The beauty is that at the app programming level, I simply wire my cluster to one simple "Write" function and all the unbundling into separate named channels happens under the hood. Similar with the "Read" function where I wire my cluster in to define the datatype. When developing the data writer and reader part of the app, it provides the functional equivalent of native binary support for clusters. The main downside is that the OpenG and MGI methods are kinda "brute force" and can't be nearly as optimized as native support could be. Also, re-clustering the individual channels might be difficult from, say, Matlab.

    Ok, maybe now I see your point better. In the OpenG-like approach, if a cluster element is an array of numerics, and that array becomes its own individually named TDMS channel, then I'm not sure I can recover the various array chunks back into the appropriate cluster elements after the fact. I suppose any such array element in a cluster might need to write additional metadata such as dimension size(s) for each chunk as yet another named TDMS channel. Or something. Then again, maybe it already does? (Not near an LV PC to check right now).

    Again, thanks for engaging the topic. I'm still on the TDMS learning curve, and am sorting through things as they come along.

    -Kevin P.

  6. Ok, more growing pains with TDMS. This time, I'm looking for advice on a strategy to use for storing a cluster of info as a channel. PJM's help in this thread was terrific to help me put single-valued clusters of config info into TDMS. However, I dug myself a hole with timestamps and daylight savings time that I recently discovered.

    My specific app queues up asynchronous "Events" as a cluster of info, including a timestamp, a typedef'ed enum for event ID, a couple strings and a couple numerics (DBL and I32). Since there are multiple instances of Events during the app, I need to store this info as a TDMS Channel. My misstep was to convert the cluster to an array of strings and treat the string array as a channel. Unfortunately, I converted the timestamp to string in a way that makes it disagree by 1 hour (daylight savings) with timestamps saved elsewhere as a raw timestamp datatype.

    So one workaround that can solve my problem today is to separate the events into 2 channels named "event_timestamp" and "event_data" so I could store raw timestamp datatypes to their own named channel. But it feels inelegant, and I'd like a better solution if possible. Specifically, I'd like to deal with this once for a generic cluster and be able to keep re-using it for a variety of as-yet-undefined future clusters of data.

    In PJM's posted code, a (presumably) small variation on the OpenG config file vi's, it appears to me that eventually, all the cluster elements get transformed into strings which are then written using 'TDMS Set Properties'. If I were to substitute 'TDMS Write' at that level, and map the "Key" as the "Channel" input, I *think* I would be able to store a (nearly) arbitrary cluster as a TDMS channel. A similar substitution on the Read side would hopefully pull the info back into the cluster. Of course, that particular quick-and-dirty method would cause all clusters to be stored as channels -- even config info, which maps more naturally into a set of TDMS properties. So I'd like to have 2 different top-level calls. One to store a cluster as a set of TDMS properties, another one to store a cluster as an element of channel data. (And of course 2 more for the subsequent Reads).

    The problem: I know I'd need to change the vi's at both the highest and lowest level to support this capability. But what about all the layers of interdependent OpenG vi's in between? Hate to duplicate them all under a new name. Not sure that adding extra input(s) throughout the hierarchy is such a good way to map through from intent at the top to implementation at the bottom either.

    Another possibility that's occurred to me is to separate the process of converting a cluster into a set of Key/Value pairs of strings for each cluster element from the process of writing those strings. That method would pass the raw strings at the lowest level all the way back up the call chain to the top level. Then the top level could decide how / where (/ whether?) to write them. This approach *might* result in 95% common code between writing a cluster to an INI file, writing a cluster as a set of TDMS properties, writing a cluster as a TDMS channel, writing a cluster in 99 other ways...
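
    To make that concrete, here's a rough text-language sketch of the split I mean, with a Python dict standing in for a typedef'ed cluster (all names invented). Step 1 produces the Key/Value string pairs; step 2 -- the caller -- decides where they go (INI file, TDMS properties, TDMS channel, whatever):

    ```python
    def flatten(cluster, prefix=""):
        """Recursively convert nested fields into a flat list of (key, value-string) pairs."""
        pairs = []
        for name, value in cluster.items():
            key = f"{prefix}{name}"
            if isinstance(value, dict):                  # nested cluster
                pairs.extend(flatten(value, key + "."))
            else:
                pairs.append((key, str(value)))          # leaf element -> string
        return pairs

    event = {"id": "OVER_TEMP", "limits": {"high": 85.0, "units": "degC"}, "channel": 3}
    for key, value in flatten(event):
        print(key, "=", value)    # the top level decides: Set Properties vs. Write
    # id = OVER_TEMP
    # limits.high = 85.0
    # limits.units = degC
    # channel = 3
    ```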

    Thoughts / ideas / critique welcomed.

    -Kevin P.

  7. I've rarely done software-timed data acq, but I did just throw together a dirt simple example on an M-series board a couple minutes ago. Created 2 AI tasks. Called DAQmx Read in 2 separate While loops that had no delays. I got the same error as you a fairly low (but non-negligible) percentage of the time. I then wrapped both DAQmx Read calls in an additional While loop that would "continue on Error", and put the output iteration counts into a pair of charts.

    Most of the time, I'd get 0 counts meaning that both Read calls worked without collision. When there was a collision, it was pretty dramatic, requiring 10's and 100's of retries. The collision rate increased dramatically when I moused stuff around onscreen.

    I know that kind of error is asserted when you try to define 2 separate hardware-timed tasks with their own sampling clocks, but I didn't realize it could also happen on software-timed AI. I guess the timing subsystem does get reserved and used briefly as the driver negotiates the multiplexing and issues a single "sample and hold" pulse for the A/D.

    Personally, I wouldn't have expected the driver to keep re-trying on its own. But I approach from the perspective of almost always doing hardware-timed data acq -- if there's a conflict when I make the call, it's very likely to remain throughout any reasonable timeout period. So I'd just as soon have the function call return quickly with an error. That said, I definitely see your point too. It just looks like the driver implementation is also biased to a long-term hardware-timed acq point of view.

    I'd think that your best bet is to use van18's semaphore suggestion. I haven't yet found a need to use it for NI cards, but I did make a similar wrapper-with-semaphore in an app where multiple parallel processes needed to talk to an external instrument over VISA.
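
    The wrapper amounts to something like this (sketched in Python with a plain lock; the LabVIEW version would put the semaphore acquire/release around each DAQmx Read, and the function names here are stand-ins):

    ```python
    import threading, time

    daq_lock = threading.Lock()               # one lock shared by every "loop" touching the device

    def fake_daqmx_read(task_name):
        """Stand-in for the driver call that must not run concurrently."""
        time.sleep(0.001)                     # pretend to multiplex + sample
        return f"{task_name}: sample"

    def reader_loop(task_name, n):
        for _ in range(n):
            with daq_lock:                    # acquire before every read, release after
                fake_daqmx_read(task_name)

    threads = [threading.Thread(target=reader_loop, args=(name, 100))
               for name in ("AI task 1", "AI task 2")]
    for t in threads: t.start()
    for t in threads: t.join()
    print("done -- every read held the lock, so no collisions")
    ```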

    -Kevin P.

  8. Apologies in advance for my thick-headedness about timestamps and time string formats. I just haven't dealt with them much before. Further comments below not meant to be argumentative, just trying to continue the conversation.

    QUOTE(tcplomp @ Aug 10 2007, 02:04 PM)

    Still wrapping my head around this UTC stuff. The NI forum link was interesting but pretty confusing. Not sure I learned what I should have learned, and not sure how much what I should have learned can help. Regardless, anything involving a change to the "Format Into String" type specifier will be a software revision & release cycle. And I *think* that if it comes to making a software rev, my planned workaround of writing the time to TDMS as a raw timestamp datatype is what I'll probably prefer. The format into string was done only because TDMS doesn't support the ability to accumulate an array of custom clusters.

    QUOTE(tcplomp @ Aug 10 2007, 02:04 PM)

    As far as I'm aware there's no difference between developement and deployment, but you might want to check the app's ini file

    The difference, if any, might be based on other characteristics of the different PCs. During development, I wrote and read on the same PC. I don't recall seeing the 1-hour discrepancy between times written as timestamp datatypes and times converted to and from strings with the '%T' specifier. During deployment, I'm reading on a completely different PC that uses XP Pro rather than XP Home and may have different Daylight Savings settings. And I'm seeing a 1-hour discrepancy. I just don't know if that discrepancy is irreversibly encoded in the file, or whether there's some way to avoid it or undo it after the fact when I read.
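
    Here's roughly the failure mode I suspect, sketched in Python since I can't post G at the moment: the wall-clock string carries no offset, so the reader applies its own timezone/DST rules to it (the dates, zone, and offsets below are just for illustration):

    ```python
    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    eastern = ZoneInfo("America/New_York")

    # Instant written in mid-May (EDT, UTC-4) on the acquisition PC:
    t_utc = datetime(2007, 5, 15, 18, 0, tzinfo=timezone.utc)
    as_string = t_utc.astimezone(eastern).strftime("%Y-%m-%d %H:%M:%S")   # "... 14:00:00"

    # A reader that interprets that same wall-clock string with a standard-time
    # offset (UTC-5, e.g. DST auto-adjust off) lands one hour away from t_utc:
    reread = datetime.strptime(as_string, "%Y-%m-%d %H:%M:%S").replace(
        tzinfo=timezone(timedelta(hours=-5)))
    print(t_utc)                              # 2007-05-15 18:00:00+00:00
    print(reread.astimezone(timezone.utc))    # 2007-05-15 19:00:00+00:00 -- off by 1 hour
    ```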

    QUOTE(tcplomp @ Aug 10 2007, 02:04 PM)

    Eastern Daylight Time (and EST = Eastern Standard Time).

    QUOTE(tcplomp @ Aug 10 2007, 02:04 PM)

    Study the

    Should have done it before, but even now it isn't at all clear to me how to use that info to predict the possible interactions with Daylight Savings settings.

    QUOTE(tcplomp @ Aug 10 2007, 02:04 PM)

    Ok, I'm starting to grasp this a little better. In my case, I'm not concerned about probes or front panel controls. My specific need is to be able to read a TDMS file and export data and analysis results to an ASCII file. The TDMS file contains 2 kinds of time info. Some of the time info was converted to ASCII strings before writing, some was stored in binary as a timestamp datatypes. Now I read the file on a different PC, which might have different Daylight Savings settings than the PC that wrote the file. I need to export that time info to an ASCII file. I'd like both types of time info to turn into ASCII strings that agree about the hour. I'm still not sure whether I have a chance to do so reliably or whether my initial "Format Into String" produced an irreversible ambiguity.

    QUOTE(tcplomp @ Aug 10 2007, 02:04 PM)

    I'm not sure but I believe the DIADem timestamp is just 64 bit and not 128 bit. The best way to store and restore a LabVIEW 128 bit timestamp is using two I64, one for the second and one for the partial second.

    All I know is that TDMS accepts timestamp datatype directly with no coercion dots.

    QUOTE(tcplomp @ Aug 10 2007, 02:04 PM)

    EDIT: Just saw you use 7.1 this version has some DST issues (see the NI discussion)

    Sorry -- actually using 8.20. I upgraded this spring for TDMS on the app in question. Got errors when I tried to update my profile just now.

    -Kevin

  9. I discovered a little quirk that has been discussed before, but I haven't got a handle on exactly what *I* need to do in my app. It has some relation to how Timestamps and Daylight Savings interact, but I'm not sure I completely understand the rules.

    Background:

    I have released code whose data shows a quirk that I never noticed during development. In question are 2 bits of code that run in parallel in my app. Both retrieve Time information from the same "Get Date/Time in seconds" primitive. Both write that Time information to a TDMS file, but with different methods. When I read that time info back on a different computer, the sets of timestamps differ by 1 hour. I strongly suspect this is tied in with the handling of Daylight savings. The computer that creates the TDMS file is offsite and I don't know its Daylight savings settings. I do know that the file was created in mid-May (when Daylight savings is eligible to be in effect). I'm reading the file now in August on a different PC.

    One bit of code is a loop that stores a timestamp and an array of temperatures at a regular rate. These timestamps are stored as a timestamp datatype to a TDMS channel. They accumulate as a 1D array of timestamps that correlate to a 2D array of temperatures.

    The other bit of code is an event logger that runs at irregular intervals, on-demand whenever parts of the app code call it. In my app, my events are a cluster containing a bunch of potentially relevant information. One cluster element is a timestamp datatype. However, TDMS doesn't allow me to treat a cluster as a channel. My workaround was to create converter subvi's that would convert back and forth between my cluster and an array of strings. I then wrote this 1D array of strings as a TDMS channel. My converter functions are "Format to String" and "Scan from String" and both use the '%T' timestamp type specifier.

    Finally, I'm mostly but not entirely sure that these 1-hour discrepancies didn't show up during development, when I created and read the TDMS files on the same PC. I am quite certain that the Event times agreed with the PC clock though.

    Timestamp <--> String & Daylight Savings Questions:

    1. Exactly when and how does Daylight Savings influence the timestamp? How can I store and recover this time info in a consistent way so that the times don't have these 1-hour discrepancies?

    Thus far, all the data in question has been both written and read during EDT. But in general, I need to handle writing in both EST and EDT and then reading on a different PC in both EST and EDT. I'd prefer a solution that can accommodate the way the data is currently written so I can avoid rolling a rev of the code. (The internal verification procedures for a code rev are far from trivial.)

    2. Specifically, let's first consider converting timestamp --> string. I wire a timestamp datatype into "Format to String" using a '%T' type specifier. Let's consider the time portion of the resulting string. Will it depend on what today's date is (whether eligible for DST)? Will it depend on what date is represented by the timestamp? Or will it only depend whether one is during DST and the other isn't? For all questions, will it further matter whether Windows is set to auto-adjust for DST? (Sorry, can't test. I'm administratively locked out of changing date/time settings on my PC). Will it matter whether I'm running XP Home or XP Pro?

    3. Now let's consider converting string --> timestamp. I wire the string result from the operation above into "Scan from String" using a '%T' type specifier. What things will affect the raw bits stored in the timestamp datatype? PC date within DST? Time string date within DST? Differing DST status for PC and time string date? Windows DST auto-adjust setting?

    Timestamp <--> TDMS & Daylight Savings questions:

    Same kinds of questions as above, but in the context of wiring a timestamp datatype as an element of "channel" data into TDMS Write, and how those timestamp bits get interpreted on subsequent TDMS Reads, possibly on a different PC, possibly in a different timezone, possibly on a date whose DST status is different than the DST status of the timestamp itself or the PC that wrote it to the TDMS file.

    Finally, if I do have to rev code, my main idea for workaround is to treat the event timestamps like I treat the temperature timestamps. I'll create a separate TDMS channel named "event_timestamp" as part of the "Event" group, and store the times as raw timestamp datatypes. All the other event cluster elements will convert back and forth to a 1D string array named "event_info". Anyone have any better suggestions for storing a moderately complex cluster as a TDMS channel?

    -Kevin P.

  10. Getting back to the original question...

    I think the very first reply from Mikael gives the answer. The explanation *why* is that the Timestamp function used in the original posted code only has a time resolution of 15.6 msec. It's simply quantization error. One call occurs at, say, (X).9998 quanta and the next call occurs at (X+1).0003 quanta. The reported time difference isn't because the execution actually took longer, it's because the measurement got quantized.

    The msec timer has a resolution of 1 msec which reduces the quantization error considerably.
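
    Just to put numbers on it (plain Python, made-up times):

    ```python
    import math

    QUANTUM = 0.0156                    # ~15.6 ms tick of the coarse timestamp source

    def quantized(t):
        return math.floor(t / QUANTUM) * QUANTUM    # what the coarse clock reports

    t_start, t_end = 1.0139, 1.0141     # the operation really took 0.2 ms
    print(t_end - t_start)                           # ~0.0002 s actual
    print(quantized(t_end) - quantized(t_start))     # ~0.0156 s reported: pure quantization
    ```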

    The rest of the story is that Windows will also occasionally give you a *real* delay that is often in the 10's of msec because it decided to do something other than service your LabVIEW app. You can't count on avoiding these delays, but the msec timer or a Timed Loop at least give you some ability to detect and measure them.

    -Kevin P.

  11. I don't have LV handy to be able to post a screenshot.

    What you posted isn't quite what I had in mind. My idea -- and it's just an idea, I'm not sure whether it'd be better or worse until after benchmarking -- would be to set up a For loop, explicitly index values out of Array 2, and use "Replace Array Subset" to overwrite the corresponding elements of subarray 2.

    I'd expect the average execution speed to be noticeably slower than the LV array subset primitive, but I'd also expect it to be more nearly constant and predictable. At least, that'd be the desired tradeoff; I'm not sure it'd work out that way. Part of my reasoning: I seem to have observed that allocating memory chunks in the kB range and smaller doesn't measurably slow execution.

    I wonder also if it may be helpful to incorporate a queue into the processing scheme. I know when I have producer-consumer loops in data acq apps, I wire directly from DAQmx Read to Enqueue Element to pass data without extra allocation. The queue is given ownership of the memory space, and only stores a pointer or something like that. Later, the call to Dequeue transfers the pointer & ownership to my consumer code, and I can do processing without having had to pass it through controls and indicators, where extra data copying might have happened.
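
    In Python terms it'd look something like this (queue.Queue standing in for the LabVIEW queue; the point is that only a reference changes hands, not a copy of the data):

    ```python
    import queue, threading
    import numpy as np

    q = queue.Queue()

    def producer(n_blocks):
        for _ in range(n_blocks):
            block = np.random.rand(10_000)    # stands in for a DAQmx Read output block
            q.put(block)                      # enqueue the reference -- no data copy
        q.put(None)                           # sentinel: acquisition finished

    def consumer():
        while (block := q.get()) is not None:
            block.mean()                      # process the same memory, no extra copy

    threading.Thread(target=producer, args=(100,)).start()
    consumer()
    ```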

    Anyone have insight into Gary's original question about execution threads?

    -Kevin P.

  12. Don't have an answer here, just some questions in hopes that they may be useful mental prods for someone.

    1. Based on the earlier camera, I assume you've got X-Y boundaries pretty well defined for the expected parts. Is this right?

    2. At the time of scheduling your X-axis sensor, are you able to know/predict how much Y movement occurs for any given delta X? I.e., you know the speed ratio of belt and sensor?

    3. What's the nature of the detailed inspection? Relative to the original X-Y boundaries of the part, does the detailed sensor produce a single data point of measurement, a small linear image vector, or a small XY image array? Or is it a case where your inspection measurement improves as you allow the sensor more time to collect?

    4. Are all objects equally time-consuming to inspect? Do you need more points for larger objects? Is it preferable to inspect from a position near the centroid of an object or does it not matter?

    The way I'm thinking, you need to somehow consider both the Y-extent of each of the objects you'd like to visit and inspect, and the delta-X proximity from the sensor's most recent position. Some priority must be given to objects that will soon move beyond the sensor.

    So you've got some sort of path-generation problem where you must land on certain points within objects using little line segments. The line segments are constrained to be either purely in y (hold sensor stationary as belt moves by), or diagonal with constant slope (based on constant belt speed divided by maximum sensor movement speed).
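
    A back-of-envelope version of that constraint (Python just for illustration, all names and numbers hypothetical): while the sensor travels dx at its top speed, the belt carries the part through dy = dx * (belt_speed / sensor_speed), so a target is only worth scheduling if it's still within reach when the sensor gets there.

    ```python
    def reachable(dx, y_target_now, belt_speed, sensor_speed, y_limit):
        """Can the sensor still land on the target after moving dx, given belt motion?"""
        travel_time = abs(dx) / sensor_speed
        y_at_arrival = y_target_now + belt_speed * travel_time
        return y_at_arrival <= y_limit, y_at_arrival

    print(reachable(dx=0.10, y_target_now=0.30,
                    belt_speed=0.5, sensor_speed=0.25, y_limit=0.60))   # (True, 0.5)
    ```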

    I don't know the field, but I'd also guess there are some algorithms out there that do this kind of thing if your X,Y targets are fixed (centroids). It may be tougher to optimize if you physically cannot hit all targets and you must make decisions about which and how many targets to miss. It also may be tougher if you try to consider landing anywhere within an object's X,Y boundaries rather than specifically targeting a single point such as its centroid.

    -Kevin P.

  13. Dunno how much/if this will help, but figured I'd add a thought to consider. When I've got apps that repeatedly process not-so-small chunks of data, I at least consider the following approach:

    - Try to design the code to process fixed-size chunks (or at least to define an upper bound to the size)

    - Processing routines that generate a buffer allocation dot are made into subvi's. The output data requiring a buffer allocation becomes a candidate for an Uninitialized Shift Register (USR).

    - I either initialize this USR array once using the "First Call?" primitive, or I turn the subvi into a small Action Engine with explicit cases for "Initialize" and "Process Data". It depends whether I can live with the memory allocation delay on the first call or not.

    If I'm following your app right, this advice may not seem directly relevant. Sounds like you've got some mini database-like sets of data / information. Your program extracts chunks of this information for processing. The point of concern is where this chunk of data is created, where data copying is necessary and memory allocation may be possible.

    My only thought there is to see whether you can combine Ben's action engine suggestion and my thoughts at the top of this post. If one of those USR's is fed directly into an output indicator, that may give LV enough clue that it can keep re-using the same memory space for that indicator on subsequent calls (it could conceivably see that the USR data and the indicator data are of the exact same size, and realize that it only needs to copy rather than allocate).

    However, all the ways LV optimizes for memory allocation are still partly a mystery to me. Sometimes I find I've done extra work to be explicitly careful about memory, only to find that using simpler sloppy-seeming built-in functions still works better anyway.

    -Kevin P.

  14. ...I do not want to store the continuous data but go in every second or more and take 1000 samples to use.

    I have also attempted to change the "RelativeTo" property to "MostRecentSample" which I thought had fixed the problem however after it ran for sometime I encountered the same error. I left the "Offset" property at the default 0...

    I've done this kind of app before -- let the card perform continuous A/D so that my app can at any time request some of the samples. I'm not certain it was necessary, but I recall that I configured one of the DAQmx property nodes to "allow buffer overwrite" while setting up the continuous data acq task. Despite setting this property, I was surprised that DAQmx still gave me errors if I let the task churn away on A/D conversion in the background without requesting data for a while.

    So the other thing I did was to set "RelativeTo" = "MostRecentSample" and "Offset" = -(# to read). That combo would give me the most recent chunk of data that was already available in the buffer, without having to wait. I found I could let the task churn for several buffer fill intervals before reading data without getting the buffer overwrite error.
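
    For reference, the same settings expressed against the (separate) nidaqmx Python bindings -- sketch only, the attribute and enum names are from memory so double-check them, and "Dev1/ai0" plus the rates are placeholders:

    ```python
    import time
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, OverwriteMode, ReadRelativeTo

    N = 1000                                              # samples per on-demand grab

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")  # placeholder channel
        task.timing.cfg_samp_clk_timing(rate=10_000,
                                        sample_mode=AcquisitionType.CONTINUOUS,
                                        samps_per_chan=100_000)
        # "allow buffer overwrite" -- verify the exact attribute name for your version
        task.in_stream.over_write = OverwriteMode.OVERWRITE_UNREAD_SAMPLES
        task.in_stream.relative_to = ReadRelativeTo.MOST_RECENT_SAMPLE
        task.in_stream.offset = -N                        # back up N samples from the newest
        task.start()
        time.sleep(0.5)                                   # let the buffer fill a bit first
        data = task.read(number_of_samples_per_channel=N) # newest N samples, no waiting
    ```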

    I'm not 100% sure of the interaction between those properties and the behavior of the error codes though. I'd have thought your approach with Offset=0 should be ok too, it'd just make you wait for the most immediate future data. Anyhow, you might try these things and see if they help your app.

    -Kevin P

  15. Herbert,

    Thanks for the support you give on the TDMS functions. I'd like to verify something before going much farther on a new app.

    I've already developed an app under LabVIEW 8.20 and DAQmx 8.3 which depends on TDMS data storage. It's now out in the field and under config control -- i.e., no unnecessary changes allowed. I'm about to make a very slight variation of that app but will need to develop with a different PC and a new LabVIEW license (long story -- contracts, accounting, etc.). I expect that I'll be receiving LabVIEW 8.2.1 and DAQmx 8.5.

    I'm planning to make a stand-alone utility for reading, viewing, and exporting portions of the TDMS data. I'd like to be able to deploy this stand-alone app back to the older computer out in the field. What are my options? I'm thinking it's something like:

    1. Talk to NI internal account people so I can install older LabVIEW 8.20 on my newly purchased license rather than 8.2.1 (Would also install DAQmx 8.3 instead of 8.5). This seems like an easy way to be sure that my stand-alone executable will be compatible with the older deployed PC.

    2. Go ahead and install LabVIEW 8.2.1, but stick with DAQmx 8.3. Then I'd be building an 8.2.1 executable that I want to deploy to an older PC with 8.20 runtime. Will this work, both in general due to runtime version difference and in particular with respect to the TDMS functions? Bear in mind that the older PC is creating the TDMS files with a LV 8.20 app.

    -Kevin P.

  16. Several comments in no particular order:

    1. It appears that the same 2 edges are being measured in both counter tasks. I'd have thought they'd share the same start edge but measure to different ending edges.

    2. You're relying on software timing to keep readings in sync. I think you might better perform buffered measurements. This would require calls to "DAQmx Timing.vi". You'd also want to call "DAQmx Start.vi" directly, rather than counting on the first "DAQmx Read.vi" call to do an auto-start.

    3. If you're buffering values with the hardware, you may want to rethink your 'timeout' input and/or # of samples to read at a time.

    4. There's still a subtle synchronization issue that could bite you if you aren't careful here. There's a possibility that the drive signal will produce the start edge after one counter task starts but before the other counter task starts. I *think* your device will support sync'ing the start time of the two tasks using an "arm start" trigger. I don't have time to explain it all now, but you can find more info on the NI site, especially the forums there. You'd probably want to generate this trigger edge yourself, perhaps with a DO bit, after both tasks have been started but before you attempt any readings.

    -Kevin P.

  17. I skimmed the manual a bit. An awful lot of low-level communication code has been implemented in the 6k driver, but I didn't happen to spot much higher-level code. You know, the kind that could be especially helpful for implementing a motion control app. So you may still need some learning curve for the 6k controller itself, learning how to use the 6k driver to implement a particular motion sequence. But again, I only skimmed.

    Since Viewpoint happens to be local to my area, I *can* say that they're definitely a quality outfit. I haven't used the 6k library myself, but I've seen some of their custom one-of-a-kind work and wouldn't hesitate to recommend them.

    Writing good, robust drivers can be a real black hole during app development. If your app needs even a moderate amount of real-time-like interaction with the 6k controller, I'd recommend buying the driver package. If you basically store a bunch of possible profiles on the 6k controller and only rarely interact to select one, you may be ok coding it up yourself.

    -Kevin P.

  18. Back again.

    I found that a little tweak down in the low level Write_Key and Read_Key functions allowed me to properly handle a few DAQmx Global Channels I had embedded in my config clusters. (I had previously been willing to live with losing the info but played around with trying to reconstruct it.)

    In short: the DAQmx Global Channel comes through as a datatype of "Refnum", value=70 (hex). I personally don't have any other cluster elements treated as Refnums, and I tended to doubt many Refnum thingies have any static meaning in a datafile anyway -- most any Refnum I think of is generated dynamically with an "Open" or "Create" type of call.

    Prior behavior: DAQmx Global Channels were identified as "Refnums" and were caught in the "Default" case where they were written as a flattened binary string. The subsequent Read failed to reconstruct the channel name from the flattened binary string.

    My change: both the Write_Key and Read_Key functions supported a case for datatypes identified as "String" or "DAQ". I'm entirely guessing, but perhaps the "DAQ" datatype is some other type of device or virtual channel type, perhaps from traditional NI-DAQ rather than DAQmx? In any event, I included "Refnum" in that list of types to handle. Here the DAQmx Global channel is written as a standard string. A similar change in the Read_Key vi properly reconstructs the channel name.

    I'm attaching the modified files here, but am not at all qualified to judge whether this change is likely to help or hurt other users. It feels risky, though I can't think of other cases where it'd be useful to reconstruct the value of a Refnum in a file. Maybe there's a better way to more conclusively identify the sub-type of the Refnum so it can be treated as a special case?

    -Kevin P.

  19. Thanks all!

    Herbert: Ok, I understand. I'm pretty sure I'm going with the simple flattened U8 array as a TDMS channel in the short term.

    Mike S: You gave me a little something to think about there. I've been studiously avoiding the waveform datatype for years. Most of my data acq has been in the handfuls of kHz and I didn't find the waveform datatype useful. The timestamp t0 was never nearly as precise as the sampling rate and didn't seem worth the bother. Not to mention that it could be very misleading for delayed-trigger acquisitions, marking t0 when you started the task rather than when the trigger came in. I also thought I'd read about inefficiencies in memory and CPU due to its cluster-like nature.

    However, I see that you were able to tuck in some useful "properties" as waveform attributes, which would be another way to get such info cleanly into a TDMS file. In my particular case, the clusters don't readily map to any true waveform data that they could "piggyback" onto, so this time I'll probably go with the brute-force method Herbert showed.

    PJM: That looks like the best solution of all. In the end, the info actually ends up as viewable TDMS properties! Hurray! Er, that is, hurray... if only I did OpenG. :unsure: (Anyone remember the old Schoolhouse Rock "hurray, I'm for the other team...")

    At work, only computers with standard corporate images get to ride on the network. LabVIEW goes on lab machines without any network connections. I could download the VIPM on my network PC, but don't have permissions to perform installs. I could install on my lab machine, but then can't see the internet to get the OpenG packages. And honestly, the old licensing requirements looked kinda painful. Our test config management favors big monolithic unchangeable executables.

    It appears that the recent licensing changes are meant to ease much of that burden, though I've only followed Jim K's announcements superficially and don't know details about which packages have ported over and which haven't. There's probably some kind of workaround for OpenG on non-networked PC's, and now that the licensing is changing, it's probably time to give OpenG some serious consideration again. I've *wanted* to use it, I just didn't want it to have to hurt.

    Thanks again to all responders.

    -Kevin P.

  20. Herbert,

    Thanks for the quick reply. I can see how that can work if I write exactly one time. A couple followup questions:

    1. Please tell me if the following is correct: When using TDMS properties, each Write operation *overwrites* old data related to that named property. When using TDMS channels, each Write operation *appends* to old data related to that named channel.

    Until now, I intended to make use of the (expected) *overwrite* behavior of properties. I planned to write the default clusters prior to starting the test. Then during the test, when certain fields of some clusters needed to be changed due to operator choices or other test conditions, I could just (over)write the cluster to the TDMS file on each change. There are several nearly independent processes running, and I was basically planning to use the TDMS properties as a storage mechanism like a functional global.

    It would appear that if I need to use TDMS channels for this data, I'd better use standard functional globals and then be sure to perform one and only one TDMS write at test completion.

    2. Just curious. I don't *intend* to change the cluster datatypes, but you suggested that you were illustrating a way to handle such changes. Can you explain more -- I'm not seeing it.

    I'm imagining the most likely change where some fields are added or removed from the cluster. The TDMS file was written with the old cluster and I'm now reading it with code based on a new cluster, containing 2 extra data fields. Won't the "Unflatten from String" function fail because the data string based on the old cluster now has the wrong length to be unflattened into the new cluster?

    Or did you mean something different?

    Thanks again! I've got a workable path forward, and understand better why the "properties" were restricted to human-friendly strings.

    -Kevin P.

  21. Ok, so I'm starting to get frustrated. I'm getting down to crunch time on a big project that includes some pretty high-speed data streaming. There's been lots of NI rah rah promotion of TDMS and I decided to buy in. And it's been just fine *FOR ACQ DATA*. But I'm having a heck of a time trying to also jam a bunch of my big configuration clusters (typedef'ed clusters containing typedef'ed clusters, enums, etc.) into any of the TDMS properties.

    I spent the morning trying to "trick" the TDMS Set Properties function. I tried converting my config cluster(s) to Variants, I tried flattening to string, I tried typecasting to string, I tried flattening to string then typecasting to U8 array, and so on. Some of these things claimed to write successfully but of those, NONE would read back successfully -- I would get an error about data corruption.

    I'm very close to bagging the whole attempt and living with the uglier solution of storing acq data and config info in 2 separate files. Has anyone out there found a reasonable way to store complex data structures in a format that TDMS will accept?

    Note: I don't want to have to unbundle every single element of the clusters manually as that will be a maintenance nightmare. Also, anything involving LVOOP wouldn't be practical now -- I don't have time for the needed learning curve. Thanks!

    -Kevin P.

  22. There's a tip I read about on NI's forums that sounded like a simple quick-and-dirty solution, but I haven't benchmarked it on large datasets so I can't really vouch for it.

    Anyhow, here goes. The 'Waveform Chart' UI indicator has lossy circular buffering built in. The tip is to hide the chart so the CPU never has to draw the data, just buffer it. The "History[]" property can be used to read the data back out. I really don't know how well it would work for your size dataset.
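
    In other terms, the hidden chart's history acts like a fixed-length lossy buffer -- something like this Python sketch (deque standing in for the chart history, sizes made up):

    ```python
    from collections import deque

    history = deque(maxlen=10_000)        # analogous to the chart history length
    for sample in range(1_000_000):       # stream data in; only the newest 10k survive
        history.append(sample)
    print(len(history), history[0], history[-1])   # 10000 990000 999999
    ```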

    -Kevin P.

  23. Don't have time to look or think hard, but one little tip to consider:

    I've got an app going with high-speed disk streaming that also pumps filtered and decimated data to the user display in pseudo real-time. One tweak I added to both the filtering and decimation steps was to preserve Min/Max info so as not to lose the kind of outliers that may require operator intervention. I would find Min/Max before filtering/decimation, then substitute those values over top of the calculated ones at the appropriate locations after the filtering/decimation. This isn't needed in all apps of course, just food for thought.
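
    The gist of the Min/Max trick, sketched with NumPy (not my exact implementation -- just the idea that each block's extremes survive the decimation feeding the display):

    ```python
    import numpy as np

    def decimate_preserving_minmax(x, factor):
        """Emit each block's min and max in time order, so spikes survive decimation."""
        n = (len(x) // factor) * factor
        blocks = x[:n].reshape(-1, factor)
        lo_idx, hi_idx = blocks.argmin(axis=1), blocks.argmax(axis=1)
        first = np.where(lo_idx < hi_idx, blocks.min(axis=1), blocks.max(axis=1))
        second = np.where(lo_idx < hi_idx, blocks.max(axis=1), blocks.min(axis=1))
        return np.column_stack([first, second]).ravel()

    sig = np.sin(np.linspace(0, 20, 100_000))
    sig[54_321] = 50.0                                    # a spike the operator must see
    print(decimate_preserving_minmax(sig, 1000).max())    # 50.0 -- spike preserved
    ```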

    BTW, I like your idea of re-decimating your history buffer at higher compression each time it fills up! Simple enough concept, but I hadn't ever thought to do it myself before.

    -Kevin P.

