Posts posted by ShaunR

  1. With output auto-indexing disabled, wouldn't the indicators outside the loop kick in compiler optimizations? Anyway, a queue in this case seems a better option.

    Yes. That's what you want, right? Fast? Also, LV has to task-switch to the UI thread. UI components kill performance, and humans can't see any useful information at those sorts of speeds anyway (10s of ms). If you really want to show some numbers whizzing around, use a notifier or local variable and update the UI in a separate loop every, say, 150ms.
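
    For illustration only, here is a rough (non-LabVIEW) sketch of that pattern in Python: the acquisition loop runs flat out and only publishes the latest value, while a separate display loop samples it at a human-readable rate. The names and the 150ms figure are just placeholders.

        import threading, time

        latest = 0.0                  # plays the role of the notifier / local variable
        done = threading.Event()

        def acquisition_loop():
            global latest
            i = 0
            while not done.is_set():
                i += 1
                latest = i            # fast loop: no UI work in here

        def display_loop():
            while not done.is_set():
                print("latest sample index:", latest)   # stand-in for the UI update
                time.sleep(0.150)                       # ~150 ms refresh

        t = threading.Thread(target=acquisition_loop)
        d = threading.Thread(target=display_loop)
        t.start(); d.start()
        time.sleep(1.0)
        done.set()
        t.join(); d.join()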

  2. Thanks for your advice, but SQLite can't run when the LabVIEW version is below 2009. Besides,

    Some people have successfully back-saved to earlier versions of LabVIEW. There are certain features of the API that use methods that weren't available in older versions of LabVIEW but, if I remember correctly, there were only 2 or 3 of them (mainly using recursion).

    How about HDF5? Is SQLite easier than HDF5?

    HDF5 is a file format. SQLite is a database. Whilst SQLite has its own file format, it has a lot of code to search, index and relate the data in the file. You will have to write all that stuff to manipulate the data contained in an HDF5 file yourself.

    Thanks. Will SQLite affect making a .exe installer?

    Not sure what you are asking here. Can you make an exe? Yes. Do you need to add things to an installer? Yes - the SQLite binary.

    If you had complex record types I would agree, but this is just straight numeric data. A binary file is not that hard to work with and gives high-performance random access with a smaller footprint than databases, because it doesn't have all the extra functionality we are not using.

    Yup. Looks really easy :) Now decimate and zoom in a couple of times with the x data in time ;) (let's compare apples with apples rather than with pips)

    What I was getting at is that you end up writing search algos, buffers and look-up tables so that you can manipulate the data (not to mention all the debugging). Then you find it's really slow (if you don't run out of memory), so you start caching and optimising. Databases already have these features (they are not just a file structure), are quick, and make it really easy to manipulate the data with efficient memory usage. Want the max/min value between two arbitrary points? Just a query string away rather than writing another module that chews up another shed-load of memory.
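
    To make that concrete, here is a quick sketch using Python's built-in sqlite3 module (the table and column names - samples, t, value - are made up for the example, not taken from the SQLite API for LabVIEW):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE samples (t REAL, value REAL)")
        con.executemany("INSERT INTO samples VALUES (?, ?)",
                        ((i * 0.001, (i % 100) - 50) for i in range(10000)))

        # Max/min between two arbitrary points: the database does the scan,
        # no hand-written search module or extra in-memory copy required.
        lo, hi = con.execute(
            "SELECT MIN(value), MAX(value) FROM samples WHERE t BETWEEN ? AND ?",
            (1.0, 2.5)).fetchone()
        print("min:", lo, "max:", hi)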

    Having said that, they are not a "magic bullet". But they are a great place to start for extremely large data sets rather than re-inventing the wheel, especially when there is an off-the-shelf solution already.

    (TDMS is a kind of database by the way and beats anything for streaming data. It gets a bit tiresome for manipulating data though)

  3. The advice for acquiring the data sounds good. Pulling the data in as chunks for parsing and placing into preallocated arrays will keep things memory efficient.

    The problem is that 100 million points is always going to cause you issues if you have it all in memory at once. You will also find that if you try to write this to a graph it requires a separate copy of the data, so that is going to cause issues again.

    I think you are going to have to buffer to disk to achieve this. You could do it as a database, but I would be just as tempted to put it in a binary file since you have a simple array. You can then access specific sets of elements from the binary file very efficiently (you cannot do this easily with a text file). For the graph, you are going to have to determine the best way to deal with this. You are probably going to have to decimate the data going into it and then allow people to load more detail of the specific area of interest, to minimise the data in memory at any given time.
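
    As a rough illustration of the binary-file route described above (not LabVIEW, and the sample format, file name and numbers are assumptions), random access plus a crude stride-based decimation might look like this in Python:

        import struct

        BYTES_PER_SAMPLE = 8      # assuming flat little-endian float64 samples

        def read_chunk(path, start, count, stride=1):
            """Read samples[start:start+count] from a flat binary file, decimated."""
            with open(path, "rb") as f:
                f.seek(start * BYTES_PER_SAMPLE)     # jump straight to the region of interest
                raw = f.read(count * BYTES_PER_SAMPLE)
            n = len(raw) // BYTES_PER_SAMPLE
            data = struct.unpack("<%dd" % n, raw)
            return data[::stride]                    # crude decimation for an overview plot

        # e.g. a decimated overview of the first 1,000,000 samples, 1 point in 1000:
        # overview = read_chunk("data.bin", 0, 1000000, stride=1000)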

    You'll end up writing a shed-load of code that implements your own bespoke pseudo database/file format that's not quite as good, and you'll be fighting memory constraints everywhere.

    Much easier just to do this:

    • Like 1
  4. Hi, I want to read and process a large amount of data (nearly 100 million rows, file size over 500MB). The original format of the file is .txt; by renaming it to .dat we get the binary file. The attachment is my VI. When the file is larger than 1 million lines (the data is a single column), I get a "memory is full" error. I want to read the data and plot a graph in the time domain, where I can see the detail with the zoom tools, and then do some analysis (FFT and statistics). I don't know how to do decimation in chunks. Another thing: maybe the memory release is also important. Can you help me? Thanks!

    The data is from a dynamic strain test; the sampling rate is 10k/s. Could we get the whole result using a little decimation of each chunk, and then, when we zoom in on some detail such as one chunk, get the whole data on the graph without decimation? Thanks!

    The easiest (and most memory efficient) solution is to pre-process the file and put the data in a database and then use queries to decimate.

    Take a look at the "SQLite_Data Logging Example.vi" contained in the SQLite API for LabVIEW. It does exactly what you describe, but with real-time acquisition.
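
    For a flavour of what "use queries to decimate" means, here is a hedged sketch using Python's sqlite3 module rather than the LabVIEW API; the table name, column name and numbers are invented for the example:

        import sqlite3

        con = sqlite3.connect(":memory:")       # stand-in for the pre-processed database file
        con.execute("CREATE TABLE log (value REAL)")
        con.executemany("INSERT INTO log VALUES (?)",
                        ((float(i),) for i in range(100000)))

        N = 1000   # keep 1 sample in 1000 for the overview trace
        overview = con.execute(
            "SELECT rowid, value FROM log WHERE (rowid % ?) = 0", (N,)).fetchall()

        # Zooming in: pull full-resolution data for just the region of interest.
        detail = con.execute(
            "SELECT rowid, value FROM log WHERE rowid BETWEEN ? AND ?",
            (25000, 26000)).fetchall()
        print(len(overview), len(detail))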

  5. If you can give me a 1d array, and the dimensions to calculate the offsets, it will be easy to recursively convert to nested JSON arrays inside of JSON arrays (and it will work for any number of dimensions).

    Done.

    BTW> One other issue that occurs to me is “ragged” arrays, arrays of other arrays of varying size. In LabVIEW, I would make a ragged array from an array of clusters containing arrays; in JSON, it would be just an array of arrays of varying length. I would probably try and add support for conversion between these types, too, if we’re going to support multi-D arrays.

    Agreed. All LV arrays are "square" arrays, however this is inefficient for large data-sets. We could actually handle non-square arrays by using arrays of DVRs internally (again, the premise being that we would only convert to "square" when necessary, on extraction)... just a thought!
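
    To show the "1D array of values plus dimension sizes" idea discussed here and in the next two posts, here is a small Python sketch - the function name is made up and this is not the library's implementation - that recursively rebuilds nested JSON arrays for any number of dimensions:

        import json

        def nest(flat, dims):
            """Convert a flat list plus dimension sizes into nested lists."""
            if len(dims) == 1:
                return list(flat)
            stride = 1
            for d in dims[1:]:
                stride *= d                  # elements per slice of the first dimension
            return [nest(flat[i * stride:(i + 1) * stride], dims[1:])
                    for i in range(dims[0])]

        flat = list(range(24))
        print(json.dumps(nest(flat, [2, 3, 4])))   # a 2 x 3 x 4 array as nested JSON arrays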

  6. It looks like it can be done with a little OpenG gymnastics (though not trivial). But this is making more work for you if you want to avoid an OpenG dependency. How do you want to proceed with that?

    Well, decoding an N-dim array is not that hard (a 1D array of values with offsets). But I'm not sure how you envisage representing it internally.

  7. I had not considered multi-dimensional arrays. JSON doesn't have a multi-dimensional array type, but we could just have arrays of arrays. I will look into supporting it. Thanks.

    N-dim arrays don't have a "type", but they can still be represented,

    e.g. Array:[[1,2,3,4,5],[1,2,3,4,5]]

    This causes us a problem in the way we convert to type from a variant (recursion works against us due to the fact that it is not a recursive function; it is iterative). If we were just dealing with strings, then it would be fairly straightforward.

    For example, using the "Set From Variant" in a for loop works fine for 2D arrays. But for 3D arrays it will give incorrect results (try it with the "Example Creation of JSON string.vi" - bug).

    One way forward is to detect the number of dims and have a different "Set JSON Array.vi" for each (max 4 dims?). But this is ugly (POOP to the rescue?).

  8. I don't really have an issue on this (however some direct attribution/link would be nice).

    However it brings up a strange angle in the whole NI.COM license discussion:

    The code is licensed under the BSD (while I don't specify it, attribution should be given).

    The ni.com license says 'Upload it and it belongs to us' (Shaun will love that wording)

    So uploading / sharing the code under the BSD license on the internet is OK, UNLESS you post it to ni.com. (other sites might be banned as well).

    Concluding: the user is not breaking the BSD license (if he attributes), but is breaking the ni.com EULA.

    Ton

    I could argue many aspects of that. The Windows API, however, is under a much stricter license (Creative Commons Non-Commercial Share-Alike).

    More importantly, in the case of the code that's taken from the Windows API, an attempt has obviously been made to pass it off as original work (plagiarism in academic circles); otherwise, why change the icons and the distribution format, and remove the attribution notices in the revision info and license file? That's just "not cricket" and should annoy you in terms of your code as much as it does me. I'm not saying that the person that posted it on NI.com did this (although they have opened themselves to the issue). They could have innocently picked it up from somewhere else. It does, however, highlight the importance of licensing, and the validation, if not commercial gain, that some seek from the efforts of others.

    Maybe a MOD can move these comments to the licensing thread so as not to gum up this support thread?

  9. Do you see the same thing if you just put a linear sequence of numbers into the sine primitive?

    I ran the updater on that machine (had to drag it over to the offices to get an internet connection), updated everything and the problem seems to have gone away. It solved the problem, but I can't tell you which patch/update/package was the one that fixed it, and I can't now replicate the problem - I just needed to get it working.

  10. There doesn't seem to be a problem in LabVIEW 2012 (lvanlys.dll 12.0.0.3) or LabVIEW 2011 (lvanlys.dll 11.0.1.2), checked for both 32-bit and 64-bit. This is on a machine with only those versions installed.

    Yup. I think lvanlys.dll is installed with the Device Drivers CD (the machine also has 2012, but the lvanlys.dll seems to be from 2010). That would also explain why it affects all versions.

    Just downloading the latest Device Drivers installer to see if that cures it.

  11. I've been troubleshooting some code to find out why I was getting corruption in waveforms when using the Pt-by-Pt VIs.

    It seems that it is a problem with the latest lvanlys.dll in the run-time, since older installations do not exhibit the problem. It affects all LabVIEW versions (from 2009 onwards) and both 32-bit and 64-bit.

    This is the result from an installation using lvanlys.dll version 9.1.0.428

    And here from an installation using lvanlys.dll version 10.0.1.428

    The problem is that the output is switching sign at arbitrary intervals as can be seen in the following table:

  12. Formula nodes are for C and MATLAB programmers that can't get their head around LabVIEW (or don't want to learn it).

    It's well known that it is a lot slower than native LV and it's a bit like the "Sequence Frame" in that it is generally avoided. I would guess there are optimisations that LabVIEW is unable to do if using the formula node which are reliant on the G language semantics (in-placeness?).

  13. Once somebody understands the "ideal" way to implement something--and more importantly why it is "ideal"--then they are in a better position to make decisions on when it is okay to use less-than-ideal implementations. But to get to that level of understanding requires somebody pointing out the flaws and consequences of the implementation they're using. IMO why is the most important question humanity has ever asked.

    That's what training courses and seminars are for. Forums (to me) are for the "I've got this problem with this code, anyone know how to fix it?" questions and general, unstructured "discussions". Therefore I don't see them as an ideal or even particularly good platform for "training". Most people IMHO post and want/need an answer in a day or so. Understanding architectures takes longer than that and doesn't solve their immediate problem.

    So using your analogy. They already have the bloody stump. I usually give them a band-aid and tell them how not to lose the other one ;)

  14. It also requires that you know the type ahead of time to drop the correct terminal. Which of course is the point of the "run time" preservation in the suggestion (yes I know there's nothing "run time" about it, but we are drawing an analogy).

    You also need to know the type ahead of time with the others as well (supply a control to define the type). I would prefer it just coerces to the type of the indicator that I hang off of it, which in fact is more useful than the "To Data", and James would get his function without having to define the type input. It (I think) should behave like a polymorphic VI, but we don't have to write all the cases.

    Until that happens, I'm still using strings, and variants (to me) are still the feature that never was.

    Can I also reiterate my long-standing peeve about not being able to create "native" polymorphic controls/indicators. XControls are another "half" solution. :D

  15. Users posting on a forum tend to have been trying for a while to get round a specific problem or lack of understanding. Muddying the waters with architecture (when not asked for) tends to just confuse and frustrate. I'm reminded of what a teacher once said to me about exam technique: answer the question, not what you think they are asking.

    I tend to solve the immediate problem (usually with an example) and then suggest improvements. However, Rolf has a huge amount of experience in umpteen programming languages, as well as solving comms problems on a day-to-day basis, so his one-liner is second nature.

    It helps when they post an example, as then you can tell their level of expertise. If they have roughly hacked an example shipped with LabVIEW and it "sort of works", they may have spent all day trying to get the last bit sorted. This generally means that if you start spouting about OOP and Actor frameworks, they will probably just give up because "LabVIEW is too hard to do simple things".

    • Like 2
  16. That just creates a single point of failure. Something I cannot do in this system as the cost of it failing is expensive. I can live with one server going down or one client, but not something central to everything.

    Depends how many dispatchers you have. If you have a single dispatcher (a centralised server, like a DNS server) then yes. If you have a dispatcher on every machine (like Apache), then no, as long as you have redundant processes elsewhere. The usual topology I use is to have a machine with, say, 5 processes and replicate that for failover. If the machine goes down then you just point to the other machine(s). That's the way the web works ;)

    But haven't we discussed this before?
