ShaunR

Members
  • Posts

    4,914
  • Joined

  • Days Won

    301

Everything posted by ShaunR

  1. In the previous example, closing the VI reference immediately before or in parallel to the second Open VI Reference function creates a race condition. Surely this is a "bug" that violates dataflow?
  2. Quite right. Poor terminology on my part. Done. Agreed. All LV arrays are "rectangular" arrays; however, this is inefficient for large data-sets. We could actually handle non-rectangular arrays by using arrays of DVRs internally (again, the premise being that we would only convert to "rectangular" when necessary on extraction)... just a thought!
  3. A local variable will be the fastest, except for moving the indicator outside the loop (and that particular optimisation won't kick in as long as you read the local somewhere, I think). The queues, however, will have to reallocate memory as the data grows, so they are better if you want all the data; but a local or notifier is preferable if you don't, as they don't grow memory.
  4. Yes. That's what you want, right? Fast? Also, LV has to task-switch to the UI thread. UI components kill performance, and humans can't see any useful information at those sorts of speeds anyway (10s of ms). If you really want to show some numbers whizzing around, use a notifier or local variable and update the UI in a separate loop every, say, 150 ms.
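The two-loop idea above can be sketched in text form. This is a minimal Python sketch (threads standing in for LabVIEW's parallel loops, and a shared dict playing the role of a local variable or notifier); the names are illustrative, not from any API:

```python
import threading, time

latest = {"value": 0.0}   # stands in for a LabVIEW local variable / notifier
done = threading.Event()

def acquisition():
    # Fast loop: just overwrite the latest value; never block on the UI.
    for i in range(100000):
        latest["value"] = i * 0.5
    done.set()

def display():
    # Slow "UI" loop: read whatever is current every ~150 ms.
    while not done.is_set():
        print("display:", latest["value"])
        time.sleep(0.15)

t = threading.Thread(target=acquisition)
t.start()
display()
t.join()
print("final:", latest["value"])
```

The acquisition loop never waits for the display, and stale intermediate values are simply overwritten, which is exactly why a local/notifier doesn't grow memory the way a queue does.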
  5. Some people have successfully back-saved to earlier versions of LabVIEW. There are certain features of the API that use methods that weren't available in older versions of LabVIEW but, if I remember correctly, there were only 2 or 3 of them (mainly using recursion).

     HDF5 is a file format. SQLite is a database. Whilst SQLite has its own file format, it also has a lot of code to search, index and relate the data in the file. You would have to write all that stuff to manipulate the data contained in a HDF5 file yourself.

     Not sure what you are asking here. Can you make an exe? Yes. Do you need to add things to an installer? Yes - the SQLite binary.

     Yup. Looks really easy. Now decimate and zoom in a couple of times with the x data in time (let's compare apples with apples rather than with pips). What I was getting at is that you end up writing search algos, buffers and look-up tables so that you can manipulate the data (not to mention all the debugging). Then you find it's really slow (if you don't run out of memory), so you start caching and optimising. Databases already have these features (they are not just a file structure), are quick, and make manipulating the data really easy with efficient memory usage. Want the max/min value between two arbitrary points? It's just a query string away, rather than writing another module that chews up another shed-load of memory.

     Having said that, they are not a "magic bullet". But they are a great place to start for extremely large data sets rather than re-inventing the wheel, especially when there is an off-the-shelf solution already. (TDMS is a kind of database, by the way, and beats anything for streaming data. It gets a bit tiresome for manipulating data, though.)
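The "just a query string away" point works from any SQLite binding. Here is a minimal Python `sqlite3` sketch with a hypothetical `samples(t, value)` table (the schema and data are invented for illustration):

```python
import sqlite3

# Hypothetical schema: a table of (time, value) sample pairs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (t REAL, value REAL)")
conn.executemany("INSERT INTO samples VALUES (?, ?)",
                 [(i * 0.001, (i % 7) - 3) for i in range(10000)])

# Max/min between two arbitrary points is a single query; the database
# engine does the scan, not your application code.
lo, hi = conn.execute(
    "SELECT MIN(value), MAX(value) FROM samples WHERE t BETWEEN ? AND ?",
    (1.0, 2.0)).fetchone()
print(lo, hi)
```

Only the two result values ever reach application memory; there is no intermediate copy of the slice to allocate.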
  6. You'll end up writing a shed-load of code that realises your own bespoke pseudo database/file format that's not quite as good, and fighting memory constraints everywhere. Much easier just to do this:
  7. The easiest (and most memory-efficient) solution is to pre-process the file, put the data in a database, and then use queries to decimate. Take a look at the "SQLite_Data Logging Example.vi" contained in the SQLite API for LabVIEW. It does exactly what you describe, but with real-time acquisition.
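Decimating with a query can be sketched in a few lines. This is a minimal Python `sqlite3` illustration (table name and modulus are invented, not taken from the LabVIEW example VI):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (t REAL, value REAL)")
conn.executemany("INSERT INTO log VALUES (?, ?)",
                 [(i * 0.01, float(i)) for i in range(100000)])

# Keep every 100th row: the decimation happens inside the database, so
# only the decimated set is ever materialised in application memory.
decimated = conn.execute(
    "SELECT t, value FROM log WHERE rowid % 100 = 0").fetchall()
print(len(decimated))
```

Zooming in is then just a matter of changing the modulus and adding a `WHERE t BETWEEN ? AND ?` clause, rather than re-reading and re-slicing a giant array.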
  9. Move the indicators out of the for loops.
  10. Well, decoding an N-dim array is not that hard (a 1D array of values with offsets). But I'm not sure how you envisage representing it internally.
  11. N-dim arrays don't have a "type", but they can be represented, e.g. Array:[[1,2,3,4,5],[1,2,3,4,5]]. This causes us a problem in the way we convert to type from a variant (recursion works against us, because our conversion is not a recursive function; it is iterative). If we were just dealing with strings, then it would be fairly straightforward. For example, using "Set From Variant" in a for loop works fine for 2D arrays, but for 3D arrays it will give incorrect results (try it with the "Example Creation of JSON string.vi" - bug). One way forward is to detect the number of dims and have a different "Set JSON Array.vi" for each (max 4 dims?). But this is ugly (POOP to the rescue?).
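The "1D array of values plus dimensions" representation mentioned above can be turned into nested JSON without recursion. A minimal Python sketch of the iterative grouping (function name and layout are illustrative, not the toolkit's API):

```python
import json
from functools import reduce

def ndarray_to_json(flat, dims):
    """Build nested JSON for an N-dim array stored as a flat 1D list
    plus a dimension vector, iteratively (no recursion needed)."""
    assert len(flat) == reduce(lambda a, b: a * b, dims, 1)
    nested = list(flat)
    # Group from the innermost dimension outwards, one pass per dim.
    for d in reversed(dims[1:]):
        nested = [nested[i:i + d] for i in range(0, len(nested), d)]
    return json.dumps(nested)

# A 2x2x3 array flattened in row-major order:
print(ndarray_to_json(list(range(12)), [2, 2, 3]))
```

Because each dimension is handled by one loop pass over the list, the same code covers 2D, 3D, or any depth, avoiding a separate "Set JSON Array.vi" per dimension count.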
  12. Without
  13. I could argue many aspects of that. The windows API is a much stricter license however (Creative Commons Non-Commercial Share-Alike). More importantly. In the case of the code that's taken from the windows API, an attempt has obviously been made to pass it off as original work (plagiarism in academic circles) otherwise, why change the icons, the distribution format, remove the attribution notices in the revision info and license file. That's just "not cricket" and should annoy you in terms of your code as much as it does me. I'm not saying that the person that posted it on NI.com did this (although they have opened themselves to the issue). They could have innocently picked it up from somewhere else. It does however highlight the importance of licensing and the validation. if not commercial gain, that some seek from the efforts of others. Maybe a MOD can move these comments to the licensing thread so as not to gum up this support thread?
  14. Well, in my defence, they were originally written in about 1997, before all the new-fangled icon bitmap editing. When a customer complains about icons rather than bugs, maybe then I'll change them
  15. Indeed. He has been a busy little squirrel. This drive info is taken from my Windows API (windows_api_Drives.llb).
  16. I ran the updater on that machine (had to drag it over to the offices to get an internet connection), updated everything, and the problem seems to have gone away. It solved the problem, but I can't tell you which patch/update/package was the one that fixed it, and I can't now replicate the problem - I just needed to get it working.
  17. Yup. I think lvanlys.dll is installed with the Device Drivers CD (the machine also has 2012, but the lvanlys.dll seems to be from 2010). That would also explain why it affects all versions. Just downloading the latest Device Drivers installer to see if that cures it.
  18. I've been troubleshooting some code to find out why I was getting corruption in waveforms when using the Pt-by-Pt VIs. It seems that it is a problem with the latest lvanlys.dll in the run-time, since older installations do not exhibit the problem. It affects all LabVIEW versions (from 2009 onwards), both 32-bit and 64-bit. This is the result from an installation using lvanlys.dll version 9.1.0.428. And here from an installation using lvanlys.dll version 10.0.1.428. The problem is that the output is switching sign at arbitrary intervals, as can be seen in the following table:
  19. Formula nodes are for C and MATLAB programmers who can't get their heads around LabVIEW (or don't want to learn it). It's well known that they are a lot slower than native LV, and a bit like the "Sequence Frame" in that they are generally avoided. I would guess there are optimisations that LabVIEW is unable to do when using the formula node which rely on G language semantics (in-placeness?).
  20. You could always do it the easy way and just extract the bytes to determine type and link them to a case structure for decoding.
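The extract-the-type-byte-and-dispatch idea can be sketched in a few lines. This Python sketch uses invented 1-byte tags for a toy flattened format (they are NOT LabVIEW's actual type descriptors); the dict plays the role of the case structure:

```python
import struct

def decode(data: bytes):
    """Dispatch on a leading type byte, like wiring the byte to a
    case structure. Tags 0x01/0x02/0x03 are invented for illustration."""
    tag, payload = data[0], data[1:]
    cases = {
        0x01: lambda p: struct.unpack(">i", p)[0],   # big-endian 32-bit int
        0x02: lambda p: struct.unpack(">d", p)[0],   # big-endian double
        0x03: lambda p: p.decode("utf-8"),           # string
    }
    return cases[tag](payload)

print(decode(b"\x01" + struct.pack(">i", 42)))
```

Each case only has to know how to unflatten its own payload, so adding a type means adding one entry rather than touching the dispatcher.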
  21. Nope. Nothing wrong. The "Unflatten Pixmap" strips the alpha channel and coerces to 24-bit. I suggest you use the rather excellent "BitMan" package in the CR.
  22. Why are my runtime exe menus gibberish?
  23. Actually, it is quite possible. Use events: all zones register for messages and filter them. This is directly analogous to what the CAN bus is doing.
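The register-and-filter scheme can be sketched as a broadcast bus. This is a Python stand-in for LabVIEW user events (all names and message IDs are illustrative): every zone sees every message and filters by ID locally, just as CAN nodes do:

```python
class Bus:
    """Broadcast bus: every registered listener sees every message."""
    def __init__(self):
        self.listeners = []
    def register(self, callback):
        self.listeners.append(callback)
    def publish(self, msg_id, payload):
        for cb in self.listeners:        # broadcast to all zones
            cb(msg_id, payload)

def make_zone(name, accept_ids, log):
    def on_message(msg_id, payload):
        if msg_id in accept_ids:         # each zone filters by message ID
            log.append((name, msg_id, payload))
    return on_message

log = []
bus = Bus()
bus.register(make_zone("zone1", {0x10, 0x11}, log))
bus.register(make_zone("zone2", {0x11}, log))
bus.publish(0x10, "temp=21")
bus.publish(0x11, "setpoint=25")
print(log)
```

The publisher never needs to know who is listening, which is the property that makes the scheme scale to many zones.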
  24. That's what training courses and seminars are for. Forums (to me) are for the "I've got this problem with this code, anyone know how to fix it?" questions and general, unstructured discussions. Therefore I don't see them as an ideal, or even particularly good, platform for "training". Most people, IMHO, post and want/need an answer in a day or so. Understanding architectures takes longer than that and doesn't solve their immediate problem. So, using your analogy, they already have the bloody stump. I usually give them a band-aid and tell them how not to lose the other one