mseb Posted October 9, 2006

Hello all,

It's the first time I ask for advice here; I hope I've done my homework well enough. I'd like your opinion on the structure of an application I need. It's a problem I've often encountered but never solved in a way that satisfies me, and I'm looking for a general solution.

I want to continuously acquire data with a DAQ card, optionally log this data to disk, perform analysis, display the results, and optionally log the results. I usually use low-end / old DAQ devices, which means the amount of data is quite manageable on a single standard PC. For example, I'm now using a PCI-MIO-16E-4 (250 kS/s, 12-bit data) and a Pentium 4 running at 2.8 GHz with 2 GB of RAM and a standard disk.

When the non-critical overhead (analysis / display) is not too large, I can easily do all this in a single while loop, as shown in NI's data acquisition examples. But as I add overhead (increasing the analysis complexity or adding several graphs), I usually end up with an unreadable diagram and complex data flow, and finally I'm really close to the hardware limits (that is, I need a HUGE acquisition buffer to avoid losing data). I don't like this approach, since it becomes problematic so easily.

Thinking about the task I need to do and improving my knowledge of LV a little, I found the master/slave design pattern quite suitable for my needs: the master loop acquires data blocks continuously and pushes them into a data queue, which is analysed later. Since both loops can execute in parallel, it doesn't hurt to stream the data to disk in either of those loops, depending on which one slows down the execution. I can refine things further and add a third (timed) loop to refresh the graphs; I pass the analysis results through a second queue containing only one element, the result to be displayed.
This works really well in my case: I can perform a lot of analysis without running out of CPU / RAM, while keeping the screen refreshed often enough. However, as I've added complexity to the analysis part, the diagram has become... awful. If I'm not clear enough, I can provide the VI, but I really don't like to show such ugly code.

I'd like to find a better way to do things. In its current state, my VI is efficient enough. There may be more optimizations to make, but first I haven't found them, and second I have no __need__ for them. My main concern is with what you guys seem to know well: code (re)usability, maintainability, etc. Does anyone have an idea where to dig? Maybe some of you have a kind of generic solution?

Do you think it is possible to separate the problem into three different programs, keeping the same data flow: one VI could perform the acquisition and push data into a "global queue", while the analysis VI would pull the data and push the results into a "display queue"? I have no idea whether this is realistic or how to implement it, so any advice / pointer / example would be welcome. My current VI is available if needed.
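The three-loop structure described above maps directly onto the classic producer/consumer pattern. Since a LabVIEW diagram can't be shown in text, here is a minimal Python sketch of the same data flow; `acquire_block()` is a hypothetical stand-in for the DAQmx Read call, and the block contents, sizes and the "analysis" are purely illustrative:

```python
import queue
import threading

data_q = queue.Queue()              # raw blocks: acquisition -> analysis
display_q = queue.Queue(maxsize=1)  # one element: latest result to display

def acquire_block(i):
    # Hypothetical stand-in for a DAQmx Read of one buffer of samples.
    return [float(i)] * 8

def acquisition_loop(n_blocks):
    # Master loop: acquire continuously and push; never wait on consumers.
    for i in range(n_blocks):
        data_q.put(acquire_block(i))
    data_q.put(None)                # sentinel: tell the slave loop to stop

def analysis_loop():
    # Slave loop: pull blocks, analyse, and overwrite the single-element
    # display queue (a lossy update, like the one-element LabVIEW queue).
    while True:
        block = data_q.get()
        if block is None:
            break
        result = sum(block) / len(block)   # placeholder "analysis"
        try:
            display_q.put_nowait(result)
        except queue.Full:
            display_q.get_nowait()         # drop the stale result
            display_q.put_nowait(result)

master = threading.Thread(target=acquisition_loop, args=(100,))
slave = threading.Thread(target=analysis_loop)
master.start(); slave.start()
master.join(); slave.join()
last_result = display_q.get()
print(last_result)                  # result of the last analysed block
```

The single-element display queue is the key detail: the display only ever sees the newest result, so a slow UI can never back up the analysis loop.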
Jeffrey Habets Posted October 9, 2006

The master/slave pattern you are using now is a perfect fit for what you want. Reading your message, it appears to me that you are trying to put all your code in the top-level VI. I think you should have a close look at your code (especially the analysis part) and try to refactor parts into sub-VIs. Try to identify parts of the code that are logically nameable, decide what inputs and outputs each part should have, and create sub-VIs for them. Use good descriptive icons for the sub-VIs. This should clear up your top-level VI while still letting you understand what's happening, just by looking at the top-level diagram.
bsvingen Posted October 9, 2006

My experience is to have one loop for logging and saving data, and that loop has to be a timed loop. If you have to do analysis "on the fly" on the raw data, do it in the same loop as well, provided the analysis is not too demanding and/or the data throughput not too high. The same goes for displaying. This works up to a point. Beyond that, put only logging and saving in that loop (if you have to save all the data); otherwise do only logging in that loop and average the data for saving in another loop (send only the averaged data with a queue or FG; don't use point-by-point averaging, but use a counter to average every 10, 100 or whatever). For displaying you can use the same loop as saving, but display only a small fraction by averaging or decimating; don't use point-by-point here either.

The basic idea is to minimize the workload by decimating and/or averaging, and to separate the logging loop from the other loops. But it depends on the requirements. If you have to save all the data, then you have to do it, and this will restrict your performance. Displaying can always be decimated, and analysis can be done after the logging is finished. There is no simple answer to this; you have to analyse what absolutely has to be done on the fly, and then cut down on the rest.
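The counter-based averaging suggested above (a counter that emits one averaged value per N raw samples, instead of point-by-point averaging) can be sketched in a few lines of Python; the input data and the factor of 10 are just illustrative:

```python
def block_average(samples, n):
    """Average every n consecutive samples; leftover samples are dropped."""
    out = []
    acc = 0.0
    count = 0
    for s in samples:
        acc += s
        count += 1
        if count == n:          # the counter bsvingen describes
            out.append(acc / n)
            acc = 0.0
            count = 0
    return out

raw = list(range(100))           # 100 synthetic raw samples
averaged = block_average(raw, 10)
print(averaged)                  # 10 values instead of 100
```

Only the averaged stream crosses the queue to the saving/display loop, so the workload there shrinks by the decimation factor.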
mseb Posted October 9, 2006 (Author)

Well, I don't like it, but I guess I'll need to show the code. It is attached; sorry, it's a work in progress and most of the control names are in French.

"The master/slave pattern you are using now is a perfect fit for what you want."

Thank you for confirming.

"Reading your message, it appears to me that you are trying to put all your code in the top-level VI."

No, I don't. Have a look at the attached VIs. But it was a good guess: I used to put all the code in the main VI, back when the dataflow was "broken" by poorly chosen analysis routines. That's not the case anymore.

"I think you should have a close look at your code (especially the analysis part) and try to refactor parts into sub-VIs. Try to identify parts of the code that are logically nameable, decide what inputs and outputs each part should have, and create sub-VIs for them. Use good descriptive icons for the sub-VIs."

It's exactly what I've tried to do. However, I may not be very good at dividing the code into logical parts (and I'd like to be, since my problem would then shift from design to implementation, which seems much easier to solve). You're free to look at the details of what I'm doing; I can explain my needs at greater length if needed.

"This should clear up your top-level VI while still letting you understand what's happening, just by looking at the top-level diagram."

It does, but as soon as I add a few different analyses, the diagram starts to become unreadable (yes, it's actually really dirty).

"My experience is to have one loop for logging and saving data, and that loop has to be a timed loop. If you have to do analysis "on the fly" on the raw data, do it in the same loop as well, provided the analysis is not too demanding and/or the data throughput not too high. The same goes for displaying. This works up to a point."

That's really what I'm trying to avoid: I want to push the point at which it stops working reasonably out of reach.
This approach gives the most unreadable code I can produce as soon as the data analysis becomes more than trivial, or am I missing something? Maybe you can show an example?

As for the timed loop: I do not understand why it would be more desirable than a while loop reading all samples. That way, the timing is implicit: it's that of your acquisition.

"Beyond that, put only logging and saving in that loop (if you have to save all the data); otherwise do only logging in that loop and average the data for saving in another loop (send only the averaged data with a queue or FG; don't use point-by-point averaging, but use a counter to average every 10, 100 or whatever)."

Well, I do not want to lose any data point; I need to keep the RAW data for further analysis. Averaging is not an option here. But data streaming bandwidth isn't a problem either, since the DAQ device tops out at 250 kS/s.

"For displaying you can use the same loop as saving, but display only a small fraction by averaging or decimating; don't use point-by-point here either."

I've studied the details of NI's GigaLV VIs, and in my case the decimation is implicit: I acquire blocks of data (typically 65536 points per block) and show the results at the end of each block, which leaves me with only a few thousand points, depending on the application.

"The basic idea is to minimize the workload by decimating and/or averaging, and to separate the logging loop from the other loops. But it depends on the requirements. If you have to save all the data, then you have to do it, and this will restrict your performance. Displaying can always be decimated, and analysis can be done after the logging is finished. There is no simple answer to this; you have to analyse what absolutely has to be done on the fly, and then cut down on the rest."

I think you have spotted the problem: it depends on what has to be done.
What I'm after here is a generic solution: one that could easily be adapted to the specific task, and that would make the best use of CPU, RAM and the human brain. Any idea / proposition?

Download File:post-5996-1160392193.llb
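One common way to cut a 65536-point block down to the few thousand points a graph can usefully show is min/max decimation, which keeps peaks visible that a plain subsampling would miss. This sketch is my own illustration of the idea, not code from the thread's VIs; the block contents and the target size are arbitrary:

```python
def decimate_minmax(block, target_pairs):
    """Reduce a block to about target_pairs (min, max) pairs for display."""
    stride = max(1, len(block) // target_pairs)
    out = []
    for i in range(0, len(block), stride):
        chunk = block[i:i + stride]
        out.append(min(chunk))   # keep the lowest value in the chunk
        out.append(max(chunk))   # and the highest, so peaks survive
    return out

# One 65536-sample block of synthetic data (a repeating ramp).
block = [float(i % 1000) for i in range(65536)]
trace = decimate_minmax(block, 1024)
print(len(trace))   # 2048 display points instead of 65536
```

Run once per acquired block, this keeps the display loop's workload constant regardless of the block size.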
bsvingen Posted October 9, 2006

After reading your post more carefully and looking at your code, it seems to me that what you need most is a basic cleanup of your code. That is just a matter of structuring your wires, replacing code with sub-VIs, etc., so it looks like a nice printed circuit board. IMO the size really doesn't matter as long as it is structured, but that is more a matter of taste. I am not that organized myself; as long as I can relatively quickly follow what I have done, I'm OK.

The timed loop is essential for continuous logging, or you will sooner or later end up with buffer problems and timing glitches between the n numbers you log during one iteration of the loop. If I understood your code correctly, you are not really doing continuous logging, but more of a batch process: you log some data, the data is sent for analysis, then for storage and display, then you log some more data, and so on. I would prefer doing this in one single loop, since this would clear your diagram of the clutter of all the queues that really aren't necessary (if I understood your diagram correctly, that is). Maybe you could use a flat sequence within the loop to visually separate the different processes, and put some more of your code in sub-VIs.
mseb Posted October 9, 2006 (Author)

"After reading your post more carefully and looking at your code, it seems to me that what you need most is a basic cleanup of your code. That is just a matter of structuring your wires, replacing code with sub-VIs, etc., so it looks like a nice printed circuit board."

Well, I'm not so sure. Of course, this code will be much more appealing once cleaned. In fact, it was clean, and then I added a few things and it became the huge ugly thing you've seen. But I'd really like to end up with something that helps me produce clean code from the start.

"IMO the size really doesn't matter as long as it is structured, but that is more a matter of taste. I am not that organized myself; as long as I can relatively quickly follow what I have done, I'm OK."

OK, I agree with you. Let's say that I prefer to keep it short if possible.

"The timed loop is essential for continuous logging, or you will sooner or later end up with buffer problems and timing glitches between the n numbers you log during one iteration of the loop."

Sorry to insist, but I really don't get the point. What is crucial is not losing data, on both the acquisition and the logging side. The easy way is to have a large enough acquisition buffer to allow for irregularities in logging time, and to write the data ASAP. The example is missing the logging part, but you can easily imagine a "Write to Binary File" VI wired to the "Read Samples" VI. Placing these VIs in the master loop ensures they are executed whenever possible; the timing is driven by the acquisition itself. Why would you add more timing on top?

"If I understood your code correctly, you are not really doing continuous logging, but more of a batch process: you log some data, the data is sent for analysis, then for storage and display, then you log some more data, and so on."
"I would prefer doing this in one single loop, since this would clear your diagram of the clutter of all the queues that really aren't necessary (if I understood your diagram correctly, that is)."

Well, I don't have it in front of me right now, but yes, the logging part was surely missing. As I said above, just imagine a binary file write wired to the output of the DAQmx Read.

Yes, a single loop clears away the clutter of the queues, but as soon as the analysis duration is of the same order as the acquisition duration, the buffer starts to fill; and since you make exactly as many acquisitions as analysis runs, it never empties, or you need to add clutter / overhead...

"Maybe you could use a flat sequence within the loop to visually separate the different processes, and put some more of your code in sub-VIs."

Yes, of course (the flat sequence). I'll be back tomorrow with a cleaned diagram. But as for the code to put in sub-VIs, do you have an idea? I have none.
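The point about the buffer filling can be made quantitative with a small back-of-envelope simulation: with parallel loops, the queue between them stays bounded as long as the *average* analysis time per block is below the acquisition period, even when individual blocks are occasionally slow. The analysis times below are illustrative assumptions, not measurements from the attached VI:

```python
# One 65536-sample block arrives every 65536 / 250000 s at full rate.
acq_period = 65536 / 250_000                 # ~0.262 s per block

# Assumed workload: nine fast analyses, then one slow one, repeated.
analysis_times = ([0.20] * 9 + [0.60]) * 5

backlog = 0.0        # seconds of data waiting in the inter-loop queue
max_backlog = 0.0
for t in analysis_times:
    # The queue grows when analysis is slower than acquisition and
    # drains (down to empty) when it is faster.
    backlog = max(0.0, backlog + t - acq_period)
    max_backlog = max(max_backlog, backlog)

print(round(max_backlog, 3))   # peak transient backlog; it then drains
```

Since the average analysis time (0.24 s) is under the 0.262 s period, the backlog is a bounded transient rather than unbounded growth; in a single loop, that same transient would have to fit inside the DAQ driver's acquisition buffer instead.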
bsvingen Posted October 9, 2006

I thought the timed loop was the easy way to prevent the buffer from filling up and the logging from lagging behind; at least it gives you the possibility to monitor the performance. Anyway, I have only looked at your code briefly, not studied it, but it looked to me like some structuring, and placing it in one single loop, would make it much more readable.