
giopper

Everything posted by giopper

  1. Hi Gary, thanks for your answer. No, I didn't know about that, I'm going to try it right now. I'll post the result here. And thanks for the link to those nice TechNet pages. G.
  2. Hi guys, I have a data acquisition system based on an NI-6122-PCI, 4 AI channels simultaneously sampled at 500 kHz over 16 bits. It's a lot of data, 4 x 500 kHz x I16 = 4 Mbyte/s (pure data). I need to transfer those samples (after timestamping and some other fast manipulation) to another machine for further processing, while the data taking is running. The transfer doesn't have to be real-time, but as simultaneous as possible with the data taking.

     I have a DAQ loop where data are collected from the hardware, once per second (4 x 500000 samples/s), and stored in a FIFO buffer (data-FIFO, an ad-hoc functional global). No problem here. Another loop takes the data from the data-FIFO, does some pre-processing and stores the result in another FIFO, the TCP-FIFO. No problem here. Another loop runs a TCP server: when a client is connected, it takes the data out of the TCP-FIFO and sends them using the LabVIEW native TCP functions. The client is connected through a private Gbit network, a direct cable between two Gbit network adapters, proven to work properly with other software (although I never really measured the true max throughput).

     Unfortunately, somewhere there must be a bottleneck, because I see that data pile up in the TCP-FIFO, i.e. the transfer from hardware to the TCP server is faster than the transfer to the client via the direct cable connection. The TCP data flow is very stable, a continuous flow almost fixed at 15% of Gbit network capability as measured by WinXP TaskMan, which (in my opinion) is ~15 Mbyte/s, assuming that the max Gbit LAN throughput is ~100 Mbyte/s. The other machine runs a dedicated TCP client written in C++ and running under Scientific Linux 5.4. The client is definitely not the bottleneck.

     I am afraid the bottleneck could be in LabVIEW: does anybody know the max throughput of LabVIEW's native TCP functions? Does LabVIEW access the NIC drivers directly or does it use an additional interface layer? If I write/compile my own TCP functions in C and call them from LabVIEW as external code, will that improve the efficiency of the TCP connection between the two machines? Suggestions to improve the Gbit transfer rate or ideas about how to improve the data transfer are more than welcome. Thanks G. LabVIEW 8.2, NI-DAQmx 8.6, WinXP32-SP2, Gbit NIC integrated in the mobo (nForce)
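     A minimal sketch of the "own TCP functions in C" idea mentioned above, assuming plain Winsock2 (this is not the poster's code; LabVIEW's TCP primitives also end up in Winsock, so the main gain from external code is finer control over socket options and buffering). It enlarges the socket send buffer and disables Nagle's algorithm, the two options most often tuned for bulk streaming, and then pushes one data block:

        /* hedged example: a bulk TCP sender callable from LabVIEW
           via a Call Library Function Node (names are illustrative) */
        #include <winsock2.h>
        #include <string.h>
        #pragma comment(lib, "ws2_32.lib")

        int send_block(const char *host, unsigned short port,
                       const char *buf, int len)
        {
            WSADATA wsa;
            SOCKET s;
            struct sockaddr_in addr;
            int  sndbuf  = 1 << 20;     /* 1 MiB socket send buffer  */
            BOOL nodelay = TRUE;        /* disable Nagle's algorithm */
            int  sent = 0, n;

            if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return -1;
            s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            if (s == INVALID_SOCKET) { WSACleanup(); return -1; }

            setsockopt(s, SOL_SOCKET, SO_SNDBUF, (char *)&sndbuf, sizeof sndbuf);
            setsockopt(s, IPPROTO_TCP, TCP_NODELAY, (char *)&nodelay, sizeof nodelay);

            memset(&addr, 0, sizeof addr);
            addr.sin_family      = AF_INET;
            addr.sin_port        = htons(port);
            addr.sin_addr.s_addr = inet_addr(host);
            if (connect(s, (struct sockaddr *)&addr, sizeof addr) != 0) {
                closesocket(s); WSACleanup(); return -1;
            }

            while (sent < len) {        /* send the whole block */
                n = send(s, buf + sent, len - sent, 0);
                if (n == SOCKET_ERROR) break;
                sent += n;
            }
            closesocket(s);
            WSACleanup();
            return sent;
        }

     In a real sender the connection (and WSAStartup) would of course be opened once and reused for every block rather than per call; measuring with a plain C sender like this mainly helps separate the network's limit from whatever overhead sits in the LabVIEW layer.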
  3. QUOTE (cls9215 @ Oct 6 2004, 08:24 PM) I attach a VI that I use to programmatically update the snapshot of a computer's desktop in a webpage :camera: To run this VI you need the clipbrd.llb library (http://forums.ni.com/attachments/ni/170/157084/1/Clipbrd.zip). The VI uses a call to the Win32 API function "keybd_event" in user32.dll, therefore it only works under Win32. The "invoke node" method suggested before should be platform independent but does not provide the entire screen. Note: the enumerator used to select the "print screen" key includes many other (probably all) possible values to call the "keybd_event" function (I found it somewhere but I don't remember where, I apologize to the author for the lack of credits :worship: ) I use LV 8.2 and I can down-convert the file only to 8.0. I hope you can find a way to convert it down to LV 7.0 (volunteers?) Have fun :beer: G
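     For reference, the Win32 call the VI wraps can be sketched in a few lines of C (a hedged illustration, not the attached VI itself): simulating a press and release of the Print Screen key makes Windows copy the whole screen to the clipboard as a bitmap, which the clipbrd.llb VIs can then read back.

        #include <windows.h>

        int main(void)
        {
            /* press and release Print Screen; the full screen ends up
               on the clipboard as CF_BITMAP */
            keybd_event(VK_SNAPSHOT, 0, 0, 0);
            keybd_event(VK_SNAPSHOT, 0, KEYEVENTF_KEYUP, 0);
            return 0;
        }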
  4. QUOTE (Paul_at_Lowell @ Aug 29 2008, 06:41 PM) Paul, could you downconvert to LV 8.2 (version I'm using)? Tnx G
  5. Hi Yair, thanks for your answer.

     QUOTE (Yair @ Aug 31 2008, 08:27 PM) I did it, I can tell you that when the ntpd is properly running (not only correctly configured but also well in sync with the external ntp server) the LV built-in time function gives accurate absolute timestamps, at least accurate enough for my application (better than 1 ms.)

     QUOTE (Yair @ Aug 31 2008, 08:27 PM) ... Maybe YOU know it. I only see that the displayed resolution is ~16 ms. I know nothing about the accuracy (maybe it's synchronized with sub-ms accuracy?). Oops, you're completely right, I didn't think about it, the two outputs (absolute time and timer) could actually be internally synchronized, I don't know either. My worry is only that the time series produced using the timer will be shifted by the amount of the error of the first absolute timestamp.

     QUOTE (Yair @ Aug 31 2008, 08:27 PM) ... The local is unnecessary... wire the data straight into the loop. ... Yes, of course, it's only an example in a test program. I did use the local only to show that the following main loop (where the new absolute timestamps are generated) can be completely disconnected from the first part. It only needs a few numbers that can be passed using a local.

     QUOTE (Yair @ Aug 31 2008, 08:27 PM) Building arrays in loops is a big no-no as it requires repeated calls for memory allocation and can hurt performance and statistics, especially when trying to time code. I see your point; what do you suggest, then?

     QUOTE (Yair @ Aug 31 2008, 08:27 PM) When you want to distribute code, it's better if you do a save as and create an LLB, as it's then a single file. Sorry, I am the only direct user of my applications, I'm not used to distributing my code. Next time I'll use an LLB.

     Anyway, I have some bad news. I found that there is always a drift between the absolute time generated by the LV built-in function and the absolute time generated with my code. My explanation is that the error in the determination of the parameters of the linear relationship, although very small, in the long run can produce a large time error (I got 50 ms over one hour.) Another explanation is that only the PC clock is drifting (after all, 50 ms/h = 1.2 s/day only) but at the moment I have no way to verify this. In other words, I will probably give up and go back to the hardware solution, using the IRIG-B code from the local GPS clock. G
  6. QUOTE (giopper @ Aug 30 2008, 05:50 AM) Forgot to attach "processor_speed.vi" subVI [ Main: test_QPC_absolute_time.vi SubVIs: processor_speed.vi qpc_timer.vi qpc_absolute_timer.vi ] G
  7. QUOTE (Raymond Tsang @ Aug 25 2008, 12:12 AM) Hi Ray, the error is at Open/Create/Replace File, but you neither create nor replace, therefore the error happens when opening the file, which is probably still open from the previous iteration (if it takes longer than 1 s to close it.) Please try to put the "Open" and "Close" primitives outside the main loop. Best G
  8. QUOTE (Yair @ Aug 24 2008, 02:59 PM) Hi Yair, thank you very much for your contribution. Unfortunately, your VI is a bit too simple: :question: what I need is a time generator with about 1 ms accuracy AND ALSO <1 ms absolute error from a traceable source. Your VI is the simplest way to merge absolute time with accurate timer ticks. However, we know that the absolute time comes with a 10-20 ms error, therefore the following time stream, although produced using the timer (1 ms relative accuracy), will be affected by that error, and the error will change from one call to another (inside a +/- 15.6 ms interval.) This makes the time series unreliable (not in sync with external absolute time sources like GPS time.)

     QUOTE (Yair @ Aug 24 2008, 02:59 PM) In any case, you can't expect to get ms accuracy in a desktop OS. While the CPU is certainly capable of it, the OS is not designed to guarantee it. ... I agree with you as long as we only talk about Win32. I can tell you that under Linux, with an ntpd server properly running, you can get better than 1 ms simply using the LV built-in time function. What I actually wonder is whether there is any (simple) way to get 1 ms accurate AND reliable timestamps under Win32 as I get under Linux (from exactly the same hardware!)

     I attach my last test VI (8.0) which gives about 1 ms accurate absolute time, using the QPC timer and converting it to absolute time by using an ad-hoc relationship (it does not handle the rollover.) [ Main: test_QPC_absolute_time.vi SubVIs: qpc_timer.vi qpc_absolute_timer.vi ] It's not elegant but I think it does the job: in the end we get accurate+absolute timestamps. Obviously, it's a starting point and can be improved, please let me know your opinion and any comments/suggestions. Thanks, G
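     The core idea behind the attached VIs, sketched in C against the Win32 APIs (an illustration under my own simplifying assumptions, not a translation of the VIs): pin one absolute reference from the system clock and then extrapolate from it with QueryPerformanceCounter, which has sub-millisecond resolution. The 10-20 ms absolute error of the initial reference remains; what goes away is the per-call jitter.

        #include <windows.h>

        static LARGE_INTEGER qpc_freq, qpc_ref;   /* QPC frequency and reference  */
        static FILETIME      ft_ref;              /* absolute reference (100 ns)  */

        void abs_timer_init(void)
        {
            QueryPerformanceFrequency(&qpc_freq);
            GetSystemTimeAsFileTime(&ft_ref);     /* absolute, ~16 ms granularity */
            QueryPerformanceCounter(&qpc_ref);    /* high-resolution reference    */
        }

        /* current absolute time in FILETIME units (100 ns since 1601) */
        unsigned long long abs_timer_now(void)
        {
            LARGE_INTEGER now;
            ULARGE_INTEGER ref;
            double elapsed_s;

            QueryPerformanceCounter(&now);
            elapsed_s = (double)(now.QuadPart - qpc_ref.QuadPart)
                        / (double)qpc_freq.QuadPart;

            ref.LowPart  = ft_ref.dwLowDateTime;
            ref.HighPart = ft_ref.dwHighDateTime;
            return ref.QuadPart + (unsigned long long)(elapsed_s * 1e7);
        }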
  9. QUOTE (Raymond Tsang @ Aug 20 2008, 12:20 AM) Hi Raymond, in your subVI "logger(TXT)" you open the log file, point to its end, write the new data and close it. You do these operations each time the subVI is called, which probably means every second. Although it shouldn't be a problem, I suggest moving the "open" before the main loop and the "close" after exiting the main loop. In fact, you do not need to open/close the file each time you have to write new data: you can open it once at the beginning and pass the file reference to the subVI, which will use it to write the data every second. The file will be accessible anyway (e.g. with a text editor like Notepad) even while it is open in LV. When you stop the main loop, you then close the file. Probably this is not related to your "file IO error", but still I would avoid opening/closing the file every second. G
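     The same open-once / write-many / close-once pattern as a small C illustration (the actual fix is just moving the LabVIEW Open and Close primitives outside the loop and wiring the file refnum into the subVI):

        #include <stdio.h>

        int main(void)
        {
            FILE *log = fopen("run.log", "a");   /* open ONCE, before the loop */
            if (!log) return 1;

            for (int i = 0; i < 10; i++) {       /* the once-per-second main loop */
                fprintf(log, "sample %d\n", i);
                fflush(log);                     /* keep the file readable from outside */
            }

            fclose(log);                         /* close ONCE, after the loop */
            return 0;
        }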
  10. QUOTE (Ben Zimmer @ Aug 18 2008, 01:49 PM) In one of my DAQ systems I transfer data packages from the DAQ loop to a TCP-server loop via a functional global used as FIFO buffer. Each data package is a 2D array, AI channels x samples, therefore the buffer uses a 3D array where the third dimension is the position of the 2D package in the buffer. Simple and efficient. G
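     A rough C analog of such a buffer (my own sketch, not the actual functional global, where the same arrays would live in uninitialized shift registers with "put"/"get" cases): a ring of fixed-size 2D blocks, i.e. the 3D array whose third dimension is the slot index.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define CHANNELS 4
        #define SAMPLES  500000              /* one second of data per channel */
        #define SLOTS    8                   /* FIFO depth */

        typedef struct {                     /* several MB: allocate on the heap */
            short data[SLOTS][CHANNELS][SAMPLES];   /* the "3D array" buffer */
            int   head, tail, count;
        } PacketFifo;

        /* returns 0 on success, -1 if the FIFO is full */
        int fifo_put(PacketFifo *f, const short pkt[CHANNELS][SAMPLES])
        {
            if (f->count == SLOTS) return -1;
            memcpy(f->data[f->head], pkt, sizeof f->data[0]);
            f->head = (f->head + 1) % SLOTS;
            f->count++;
            return 0;
        }

        /* returns 0 on success, -1 if the FIFO is empty */
        int fifo_get(PacketFifo *f, short pkt[CHANNELS][SAMPLES])
        {
            if (f->count == 0) return -1;
            memcpy(pkt, f->data[f->tail], sizeof f->data[0]);
            f->tail = (f->tail + 1) % SLOTS;
            f->count--;
            return 0;
        }

        int main(void)
        {
            PacketFifo *f = calloc(1, sizeof *f);     /* zeroed head/tail/count */
            static short pkt[CHANNELS][SAMPLES];      /* one dummy data package */
            if (!f) return 1;
            fifo_put(f, pkt);
            printf("packages buffered: %d\n", f->count);
            free(f);
            return 0;
        }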
  11. QUOTE (PJM_labview @ Aug 22 2008, 06:49 PM) Hi, a similar "recovery error" happened to me a few times... I agree with PJM: by clicking the "cancel" button on the first dialog the backup files are moved (not deleted) to the archive directory. Why, then, is the same option not given on the second dialog? Btw., what did you find in that LVAutoSave\errors directory? Any file without a name? G
  12. Here I am again. Yair, Michael, thanks for your replies.

     QUOTE (Yair @ Aug 21 2008, 03:07 PM) No way to change that, I know. I also tried calling the Win API GetSystemTime directly but I get exactly the same accuracy (~16 ms), so I guess it is a Win limitation. I can tell you that under Fedora 7 I had no problem getting ~1 ms accuracy, same code, same LV functions.

     QUOTE (Yair @ Aug 21 2008, 03:07 PM) If you want to get ms values, you can use the tick count primitive ... Do you know an easy way to get absolute time from the tick counter? I did it in the following (definitely not easy) way:
     1) in the initialization part of my code I use a loop to collect absolute timestamps and timer counts, (hopefully) simultaneously, for a few seconds at a few ms intervals: it produces two arrays of about 1000 elements, one with the timer ticks, the other with the corresponding absolute timestamps.
     2) I then use the linear fit function to define the linear relationship between the two arrays. Although the absolute values have 10-20 ms accuracy, averaged over a few seconds they represent the actual absolute time sequence well.
     3) in the main part of the code, then, I only use the tick counter and the previously defined linear relationship to calculate the absolute time corresponding to the timer ticks.
     I get absolute timestamps with ~1 ms accuracy, very close to correct, although I found a small offset (<2 ms) which is constant during execution but changes from one run to the other ... In principle it can be compensated...

     QUOTE (Yair @ Aug 21 2008, 03:07 PM) If you want higher accuracy, you can call the queryperformancecounter API function for a microsecond resolution, but unless you can sync it correctly, I doubt it would help you. I already wrote some code to use the QPC API, it works great. I'm going to try it with the same synchronization procedure described above for the LV timer.

     QUOTE (mross @ Aug 21 2008, 04:29 PM) OK. I think you cannot do this without hardware in your PC to get the traceable timestamp. Then you have to synchronize it with the triggering of the acquisition. This is not so easy as far as I know. ... That's the way I did it in the past, with the help of a PCI module which needs to be connected to the station GPS clock and uses the external hardware trigger to latch the event time with 100 ns resolution. It kind of increases the complexity of the system :thumbdown: I'd be happy to get rid of it...

     QUOTE (mross @ Aug 21 2008, 04:29 PM) I have seen discussions of this business with the absolute timestamp, but I have never done it myself. It is a difficult complication that I have tried hard to avoid. You clever guy. I'm going to do some more tests of the synchronized QPC, then I'll post some code here to show you how it works, get your feedback and possibly improve it. Thanks again for your attention, G
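     The three calibration steps above, sketched in C against the Win32 APIs (my own illustration; the original code uses the LabVIEW timer and linear fit function, and the same scheme is what would apply with QueryPerformanceCounter as the tick source):

        #include <windows.h>
        #include <stdio.h>

        #define NPAIRS 1000

        static double ticks0, abs0;        /* baselines, subtracted for numerical safety */
        static double slope, offset;       /* the fitted linear relationship             */

        static double qpc_ticks(void)      /* high-resolution timer reading */
        {
            LARGE_INTEGER c;
            QueryPerformanceCounter(&c);
            return (double)c.QuadPart;
        }

        static double abs_seconds(void)    /* system time in seconds (FILETIME / 1e7) */
        {
            FILETIME ft;  ULARGE_INTEGER t;
            GetSystemTimeAsFileTime(&ft);
            t.LowPart = ft.dwLowDateTime;  t.HighPart = ft.dwHighDateTime;
            return (double)t.QuadPart / 1e7;
        }

        void calibrate(void)
        {
            double x[NPAIRS], y[NPAIRS], sx = 0, sy = 0, sxx = 0, sxy = 0;
            int i;
            ticks0 = qpc_ticks();
            abs0   = abs_seconds();
            for (i = 0; i < NPAIRS; i++) {             /* step 1: collect pairs */
                x[i] = qpc_ticks() - ticks0;
                y[i] = abs_seconds() - abs0;
                Sleep(3);                              /* a few ms between samples */
            }
            for (i = 0; i < NPAIRS; i++) {             /* step 2: least-squares fit */
                sx += x[i];  sy += y[i];  sxx += x[i]*x[i];  sxy += x[i]*y[i];
            }
            slope  = (NPAIRS*sxy - sx*sy) / (NPAIRS*sxx - sx*sx);
            offset = (sy - slope*sx) / NPAIRS;
        }

        /* step 3: map any later timer reading to absolute time (seconds since 1601) */
        double ticks_to_abs(double ticks)
        {
            return abs0 + offset + slope * (ticks - ticks0);
        }

        int main(void)
        {
            calibrate();
            printf("now = %.3f s\n", ticks_to_abs(qpc_ticks()));
            return 0;
        }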
  13. Thanks, Michael, for your reply.

     QUOTE (mross @ Aug 21 2008, 01:06 AM) OK, I'll try to better focus on what I actually need.

     QUOTE (mross @ Aug 21 2008, 01:06 AM) ... b) How about better than 1ms relative to the original start time, where the start time is nailed down and traceable? ... The b) scenario corresponds to my case. My data stream must be synchronized to some other data, provided by another piece of hardware, and the sync is done by matching the two timelines. The data stream is not continuous but comes in a sequence of data packages, triggered by an external hardware device. Some numbers: 120 AI channels sampled at 500 Hz, 220 ms packages, and about 40 ms pause between packages. For each package I need at least one absolute timestamp: the other 109 timestamps follow at a regular 2 ms interval.

     QUOTE (mross @ Aug 21 2008, 01:06 AM) ... you trust the first time stamp and its relationship to the starting of the timer, you have a very good idea when each of the events occurred and can easily add them to the time stamp for a pseudo absolute time. I love that, "pseudo absolute." ... Exactly: I trust the clock of my DAQ hardware and I know the sample rate, therefore I have no problem knowing the relative timestamps with an accuracy that is enough for my application. The problem is to have at least one accurate (1 ms) absolute time reference with a well known (again, 1 ms accuracy) relationship to my time series.

     In the past, I used the IRIG-B time code given by the station GPS clock (decoded by a PCI board) to get the absolute time. However, I found that under Linux Fedora7 + LV 8.2 + NI-DAQmx 8.0 I can use the LV built-in time function to get a "pseudo-absolute" timestamp which is "absolute" enough for my application (as long as the ntpd server is running and properly working.) For some other reason, I was forced to move to WinXP32 and surprisingly I do not get the same accuracy anymore (at least a factor of 10 worse) although using exactly the same hardware. I really would like to get rid of that IRIG-B board, and under Linux it was actually possible.

     Questions: a) how can I produce 1 ms accuracy absolute timestamps under WinXP32 without using additional/specific hardware? b) why is the LV built-in time function a factor of 10 worse under WinXP32? Thanks G
  14. QUOTE (Raymond Tsang @ Aug 20 2008, 08:13 PM) What about some power-saving settings (e.g. switching off the hard disks) in the BIOS? Do you get any error message from LV? G
  15. In my data acquisition system I need absolute time references with about 1 ms accuracy. I first developed some code under Linux Fedora7 and the LV built-in time function was accurate enough for my application. When I use the same code under WinXP32 (running on the same hardware!), however, I do not get better than about 15 ms. I did try LV 7.1, 8.0 and 8.2: same result. Is it a Win32 API problem? Is there any known way to get absolute timestamps under WinXP32 with 1 ms accuracy? The LV timer seems accurate enough: is there an accurate and reliable way to convert timer ticks to absolute time? And what about using the Windows API function "QueryPerformanceCounter" to get even more accurate ticks? Thanks G
  16. QUOTE (MJE @ Aug 4 2008, 10:41 PM) You can change the values of controls embedded in a cluster by using bundles. However, you cannot change the definition of the cluster (i.e. the type of a control inside the cluster, which I guess was actually your question.) Like a structure, a cluster must be declared, i.e. you must define it before you run the VI, and there is no way to modify its definition programmatically. In the initialization part of your program, you can define a very general cluster, a cluster with all the types you could need during execution, and then use case structures to decide which control to use/modify. Definitely not elegant, but it works. Hope this helps. G
  17. I purchased LV 8.0 Full Development System AND LV 8.0 Application Builder, upgrades from 7.1, Windows version. Of course, I have the corresponding "certificates of ownership" with S/N + P/N. I received a box from NI including three CDs; two of them contain drivers, and the third one is "LabVIEW 8 Development System and Application Builder for Win...". The title clearly says that it includes the Application Builder. I installed LV 8.0, no problem, licensing and registration OK. However, I didn't find a way to install the Application Builder. Actually, it looks to me as if it is not present on the CD at all. Does anybody know how this works? Thanks Gio