
mseb

Members
  • Posts

    11
  • Joined

  • Last visited

    Never

mseb's Achievements

Newbie

Newbie (1/14)

0

Reputation

  1. Well, I used to like this tool very much. However, I only use LabVIEW from time to time, and it seems that it no longer works with LV 8.2. Did you, by chance, "upgrade" it?
  2. So, welcome! OK, let's try to analyse your problem. It would be much easier to answer with the code rather than the screen captures, but I'll try anyway. First, are those analysis results graphs or scalar values? If they are scalar, I wouldn't worry about how often they are refreshed; considering the other operations (DAQ / analysis), I'd say your bottleneck is not there. Since you seem to have a problem, I'll guess there are at least a few graphs, which indeed will tend to slow your VI down if refreshed too often. The problem here is that displaying a graph is time consuming. I've faced similar problems (in fact, I'm currently cleaning my own VIs before posting them for comments). Here are a few pieces of advice, roughly sorted from easiest to implement:
     - If you have multiple charts, maybe you can combine the inputs so you end up with fewer charts (BTW, charts have an option to stack the different plots, which I find handy in that case).
     - Autoscaling graphs and charts is time consuming; turning autoscale off can help.
     - I've found that charts with a large history length can be really time consuming.
     - It's usually not necessary to display all of the data; you can decimate the data before displaying it.
     - You can choose to update the graphs only on some iterations (say, divide the frequency of your main loop by 500).
     - You can use the producer / consumer design pattern to decouple the display from the acquisition / streaming.
     Those are the few ideas I have (a rough sketch of the decimation and every-Nth-iteration ideas follows below); you should be able to dig further this way. Read the examples from NI; the LV help is really great! I'm short on time right now, but you can look for the examples I've posted here; the wiring is horrible, but you could pick up a few points. One final question: why do you need to loop at 1 kHz? It's often easier to get more data at a lower loop rate than less data at a higher loop rate. It would be much easier to help you with the code.
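A rough text-language sketch of two of the ideas above: decimating the data before display, and refreshing the display only every Nth iteration. LabVIEW block diagrams can't be pasted as text, so this is plain Python, and every name and number in it is invented purely for illustration.

```python
import numpy as np

DISPLAY_EVERY = 500        # refresh the graph once every 500 loop iterations
MAX_PLOT_POINTS = 250      # never send more than ~250 points to the graph

def decimate_for_display(block, max_points=MAX_PLOT_POINTS):
    """Keep every k-th sample so the array actually plotted stays small."""
    step = max(1, len(block) // max_points)
    return block[::step]

for iteration in range(10_000):
    block = np.random.randn(1000)            # stands in for one DAQ read of 1000 samples
    # ... analysis and disk logging would run on every iteration ...
    if iteration % DISPLAY_EVERY == 0:       # the graph update is skipped most of the time
        small = decimate_for_display(block)
        print(f"iteration {iteration}: plotting {len(small)} of {len(block)} points")
```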
  3. Well, I'm not so sure. Of course, this code will be much more enticing once cleaned. In fact, it was clean, then I added a few things and it became the huge ugly thing you've seen. But I'd really like to end up with something that helps me produce clean code from the start. OK, I agree with you; let's say I prefer to keep it short if possible. Sorry to insist, but I really don't get the point. What is crucial is not losing data, both on the acquisition and on the logging. The easy way is to have a large enough acquisition buffer to allow for irregularities in logging time and to write the data as soon as possible. The example misses the logging part, but you can easily imagine a "Write to Binary File" VI wired to the "Read Samples" VI. Placing these VIs in the master loop ensures they are executed whenever possible; timing is driven by the acquisition itself. Why would you add more timing? Well, I don't have it in front of me right now, but yes, the logging part was certainly missing. As I said above, just imagine a binary file write wired to the output of the DAQmx Read. Yes, a single loop removes the clutter of the queues, but as soon as the analysis duration is of the order of the acquisition length, the buffers start to fill, and since you make the same number of acquisitions as analysis runs, they never empty, or you need to add clutter / overhead (a rough back-of-the-envelope illustration follows below). Yes, of course (the flat sequence). I'll be back tomorrow with a cleaned diagram. But as for which code to put in subVIs, do you have any ideas? I have none.
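To put rough numbers on the single-loop argument above (the block size and analysis time are invented; only the 250 kS/s rate appears in these posts): a 65536-sample block at 250 kS/s takes about 0.26 s to acquire, so if analysing one block takes, say, 0.30 s, a single loop consumes data more slowly than the hardware produces it, and the acquisition buffer grows by roughly 250 000 × 0.30 − 65536 ≈ 9500 samples per iteration without ever draining.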
  4. Well, I don't like it, but I guess I'll need to show the code. It is attached; sorry, it's a work in progress and most of the control names are in French... Thank you for confirming. No, I don't; have a look at the attached VIs, but it was a good guess. I used to put all the code in the main VI back when the dataflow was "broken" by not-well-chosen analysis routines; that's not the case anymore. It's exactly what I've tried to do. However, I may not be very good at dividing the code into logical parts (and I'd like to be, since my problem would then shift from design to implementation, which seems much easier to solve). You're free to look at the details of what I'm doing; I can explain my needs at greater length if needed. It does, but as soon as I add a few different analyses, the diagram starts to become unreadable (yes, it's actually really dirty). That's really what I'm trying to avoid: I want to push the point at which it stops working reasonably far out of reach. This approach gives the most unreadable code I can produce as soon as the data analysis becomes more than trivial, or am I missing something? Maybe you can show an example? As for the timed loop: I do not understand why it would be more desirable than a while loop reading all samples. That way, the timing is implicit: it's that of your acquisition. Well, I do not want to lose any data points; I need to keep the raw data for further analysis. Averaging is not an option here, but data streaming bandwidth isn't a problem either, since the DAQ device has a bandwidth of 250 kS/s. I've studied the details of NI's GigaLV VIs, and in my case the decimation is implicit: I acquire blocks of data (typically 65536 points per block) and show the results at the end of those blocks, which leaves me with only a few thousand points, depending on the application (rough numbers below). I think you have spotted the problem: it depends on what you need to do. What I'm looking for here is a generic solution: one that would be easily adapted to the specific task and that would make the best use of CPU, RAM and the human brain. Any ideas / propositions? Download File:post-5996-1160392193.llb
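With the figures quoted above: a 65536-point block at 250 kS/s arrives roughly every 65536 / 250 000 ≈ 0.26 s, so the display is refreshed only about four times per second, and each refresh carries just the few thousand points of per-block results rather than the raw samples (the exact count depends on the application, as the post says).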
  5. Hello all, it's the first time I've asked for advice here; I hope I've done my homework well enough... I'd like to have your feeling about the structure of an application I need. It's a problem I've often encountered but never solved in a way that satisfies me, and I'm looking for a kind of general solution. I want to continuously acquire data with a DAQ card, optionally log this data to disk, perform analysis, display the results and optionally log the results. I'm usually using low-end / old DAQ devices, which means the amount of data is quite manageable with a single standard PC. For example, I'm now using a PCI-MIO-16E-4 (250 kS/s, 12-bit data) and a Pentium 4 running at 2.8 GHz with 2 GB of RAM and a standard disk.

     When the non-critical overhead (analysis / display) is not too large, I can easily do all this with a single while loop, as shown in NI's examples for data acquisition. But as I add overhead (increasing the analysis complexity or adding several graphs), I usually end up with an unreadable diagram, a complex data flow and, finally, something really close to the hardware limits (that is, I need a HUGE acquisition buffer not to lose data). I don't like this approach, since it becomes problematic so easily.

     Thinking about the task and improving my knowledge of LV a little, I found the master / slave design pattern to be quite suitable for my needs: the master loop acquires data blocks continuously and pushes them into a data queue, which is analysed later. Since both loops can execute in parallel, it doesn't hurt to stream the data to disk in either of those loops, depending on which one slows down execution. I can refine things further and add a third (timed) loop to refresh the graphs: I pass the analysis results through a second queue, containing only one element, the results to be displayed (a rough text-language sketch of this three-loop structure follows below). This works really well in my case, and I can perform a lot of analysis without running out of CPU / RAM while keeping the screen refreshed often enough. However, as I've added complexity to the analysis part, the diagram has become... awful. If I'm not being clear enough, I can provide the VI, but I really don't like showing such ugly code.

     I'd like to find a better way to do things. In its current state, my VI is efficient enough; there may be more optimizations to make, but first I haven't found them, and second I have no __need__ for them. My main concern is with what you guys seem to know well: code (re)usability, maintainability, etc. Does anyone have an idea where to dig? Maybe some of you have a kind of generic solution? Do you think it is possible to separate the problem into three different programs, keeping the same data flow: one VI could perform the acquisition and push data into a "global queue", while the analysis VI would pull the data and push the results into a "display queue"? I have no idea whether this is realistic or how to implement it, so any advice / pointer / example would be welcome. My current VI is available if needed.
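A minimal sketch of the three-loop structure described above (in plain Python with queues and threads, since a LabVIEW block diagram can't be pasted as text): the acquisition "master" pushes blocks into a data queue, an analysis loop consumes them, and the display loop only ever sees the latest result through a one-element queue. All function names and timings here are invented for illustration; only the 250 kS/s rate and the 65536-point block size come from the posts.

```python
import queue
import threading
import time
import numpy as np

data_q = queue.Queue()                 # acquisition -> analysis
display_q = queue.Queue(maxsize=1)     # analysis -> display, latest result only

def acquisition_loop(n_blocks=20, block_size=65536):
    for _ in range(n_blocks):
        block = np.random.randn(block_size)   # stands in for DAQmx Read
        data_q.put(block)                     # push the block for later analysis
        time.sleep(block_size / 250_000)      # 250 kS/s pacing comes from the hardware
    data_q.put(None)                          # sentinel: acquisition finished

def analysis_loop():
    while (block := data_q.get()) is not None:
        result = (block.mean(), block.std())  # stands in for the real analysis
        try:
            display_q.put_nowait(result)      # keep only the freshest result
        except queue.Full:
            pass                              # display hasn't caught up: skip this update
    display_q.put(None)                       # forward the sentinel to the display loop

def display_loop():
    while (result := display_q.get()) is not None:
        print(f"mean = {result[0]:+.3f}   std = {result[1]:.3f}")  # stands in for the graph
        time.sleep(0.2)                       # the display refreshes only a few times per second

threads = [threading.Thread(target=f) for f in (acquisition_loop, analysis_loop, display_loop)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```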
  6. Thank you for pointing this out; it's a real goldmine. I knew the three you cited, but didn't know this one: "If you lie to the compiler, it will get its revenge", from Henry Spencer. My favorite one, definitely.
  7. I've discovered these forums only recently and I'm still going through all this information, so the technical discussions will have to wait a little, until I understand all of it. But I wanted to agree with you: it took me weeks to realize that the really simple statistics I was performing on "live" data were *really* slowing down the experiment. One day I decided to evaluate number of points × number of acquisitions / sampling frequency: it came to around 1 minute, while I was watching the graphs build for 20 minutes or more. All of this was only possible thanks to the huge buffering, of course. Of course, I was refreshing many graphs thousands of times, and of course my algorithms were ugly. I've done my homework, gone back to the wiring board, written a few lines of maths, and everything is much better. I was still lacking an "inline" tool, though, until now...
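To make the figure above concrete (the block size and acquisition count are invented; only the order of magnitude matches the post): 65536 points × 200 acquisitions / 250 000 S/s ≈ 52 s, i.e. about a minute of actual signal, so a run that keeps the screen busy for 20 minutes or more spends well over 90 % of its time on display and analysis overhead rather than on acquisition.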
  8. mseb

    Hello world

    Well, I hope I won't have to, but who knows? It might be helpful one day. The translation is direct: "Bienvenue
  9. mseb

    Hello world

    Thank you very much, I'm happy to be here. Nice, really... Do you open-source this? I hope so. Although I'm quite sure I'll get more than I'm able to give, that's the advantage of a community, isn't it?
  10. mseb

    Hello world

    Hello all! I've recently registered on these forums; my first PM, from M. Aivaliotis (not sure whether it was an automatic one, though), advised me to introduce myself here, so here I am. I'm not a computer scientist; I'm a physicist (specialised in optics). I've been using LV for a few years to develop and automate optics experiments for R&D purposes. Over the years, I've become more and more interested in this part of my work. I really started to get interested in "wire engineering" and code efficiency this year, after having real trouble doing what I wanted with limited hardware resources. I was pointed to the LAVA forums by someone on the NI forums while I was asking about optimization and execution speed. Reading the archives has really been useful to me: thank you all for that. I hope to be able to take an active part in discussions and to contribute code; I really like the idea of open LV code. Finally, I'm not a native English speaker, so I hope you'll forgive my poor man's English. OK, back to work now, I have an awful diagram to clean.
  11. Hello, this is my first post here; I hope I won't be completely NaN for this first contribution. I'm trying to find my way on the subject of both good programming techniques and good execution speed with LV. After reading this discussion, I decided to use formula nodes for heavy computations, but I had not looked at the details of the test VI; I had just verified that using a formula node instead of "standard" functions was indeed much faster. Looking at it today, I was surprised to see that Formula Node 2 (arrays outside the formula node) uses three array indexing operations. Am I the only one who uses auto-indexing? If you do, the results for this case improve by about 30% (on my machine, the "Factor" drops from 97 to 67), which IMHO makes this solution rather competitive with the F95 or C DLLs. By the way, the auto-indexing trick does not have such an impact on all test cases (at least not on those I've tried). I thought this was worth writing my first post about, so here it is. If anyone has a generic solution that keeps both code readability and execution speed, I'd love to hear about it: I **hate** formula nodes, they're really too large for my screen and I can't type two lines of code without introducing three bugs! Thank you for having read me this far, and thank you to all the contributors to these forums; it's been a fascinating and fruitful read for me. I hope I'll be able to give you something back.
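(As a quick check of the figure above: a "Factor" dropping from 97 to 67 is an improvement of (97 − 67) / 97 ≈ 31 %, consistent with the quoted "about 30 %".)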