All Activity

  1. Past hour
  2. Any hierarchy. For example, I just created this new class in an empty project. Unless I'm mistaken, it should show this class descending from the LabVIEW Object class. Instead...
  3. Yesterday
  4. I have never noticed any memory issue, and I wouldn't expect memory as a potential issue with customized controls (the issue that can affect controls is an excessive redraw rate that shows up as a higher CPU load, not memory).
  5. Hi, I made an application in LabVIEW 2013 SP1 using the Silver controls palette. Then I replaced the controls with the beautiful controls of the Flatline library, and now my UI looks really nice :). But when I look at the memory usage of my new “flat” app version, I see that it is nearly double that of my original “non-flat” app. After the app runs for a while, the memory usage drops back to the original value. Have you noticed this extra memory usage? Could it be related to the Flatline library? Should I check an extra option in the app's build options? Thank you very much
  6. Last week
  7. A) LabVIEW will hold onto memory once allocated until the system says it is running low or until we have a large enough block to give back to the OS. Task Manager tells you nothing about how much memory LabVIEW is using, only how much it has available to use. B) Every wire can maintain an allocation of the data if the compiler determines that keeping the buffers inflated gives us the best performance. You can force a subVI to release those allocations at the end of every execution by calling the Request Deallocation primitive, but I REALLY DO NOT recommend this. You’ll kill performance and possibly not save anything, as people usually discover their app immediately re-requests the freed memory. Unless you are getting out-of-memory errors or are a real pro at data flow memory optimization, I recommend that you let LabVIEW do its job of managing memory. Beating LabVIEW and OS memory management generally requires bursty data flow with infrequent but predictable data spikes, where you can teach LabVIEW something about the spikes.
  8. That’s a new one. Is this any hierarchy or a specific hierarchy?
  9. The swap value does not seem to get rid of the problem. The second data instance is still in memory. (When I call the VI several times in a row, the number of copies grows more slowly with the swap, but that's still not good enough.) If I use the Request Deallocation VI, that seems to get rid of the copies when the VI finishes, but I don't know if that's a good design. And the second copy is in memory while the VI runs anyway, which I would prefer to avoid. I already looked at the TDMS with external DVR but I don't know if/how I can use that. I don't seem to be allowed to wire a normal DVR to the VI.
  10. In theory I think you can use the 'Swap Values' primitive between the DVR read and DVR write to swap the specific handle over into the DVR, but that may be wrong. TDMS also has an e-DVR option: https://zone.ni.com/reference/en-XX/help/371361R-01/glang/tdms_advanced_functions_dataref/ that should have almost no memory overhead.
  11. You can find out more and sign up here: https://forums.jki.net/topic/3082-vipm-2020-beta-sign-up-is-now-open/ here’s a teaser (below). Lots more exciting stuff to come...
  12. VIPM 2020 Beta is now available - sign up here: https://forums.jki.net/topic/3082-vipm-2020-beta-sign-up-is-now-open/
  13. Hi, I am reading big TDMS files (~1 GB) and I am trying to minimize memory usage. I read a signal once and want to keep it for later use, so I tried writing it to a DVR in the hope of having only one instance of the data in memory. However, Task Manager tells me that the app uses twice as much memory as it should. I think the problem is that the array is not deallocated, even though I don't need it after writing it to the DVR. Is there a solution to make sure I don't have copies of the data?
  14. Has anyone ever seen this? I've tried renaming my Labview.ini file, but no change. It was working fine earlier today, but somehow it appears my Class Hierarchy view has broken. Any class I try to view comes up with an empty window (or sometimes a box with thin borders) and if I hit the Actual Size or Fit to Window buttons then I end up with this.
  15. [Update: NI Bug 974336] There seems to be a bug in the coercion of data to variant when a cluster contains a single element that is a variant. (original post here). Note: This bug appears to be very old, going as far back as LV2012. This has been reported to NI in the LV2020 Beta forum. I don't have a Bug ID / CAR yet. Coerce to Variant Fail (LV2019).vi Note that adding another element to the outer cluster causes the problem to go away.
  16. Sure. I did not expect the same speed, I was just hoping that I would still be able to do such a retrieval fast enough for it to still feel near instantaneous to the users. That would allow for it to be maybe 10x slower than the chunk-based binary file format. Your library is the only one I have found that also runs on RT, and it is also, I see now, the fastest one, just 20x slower than my current solution. I might just need to adjust to that though, to enjoy the many other benefits 🙂
  17. I don't think you can match the raw pull-from-file performance of something like TDMS (unfragmented, of course). SQLite's advantage is its high capability, and if you aren't using those capabilities then it will not compare well.
  18. Probably not. LabVIEW gets out of the way of the running front panel for you to have full control of your app, but there's never been a reason for us to do that for a block diagram. I doubt there's a way to achieve what you're trying to do in LV as it stands.
  19. Ok. As an alternative, is there a way to get the tab key to function as a tool selector on a block diagram that's in a SubPanel of a running VI? (It seems that the VI that owns the SubPanel is swallowing the keystrokes).
  20. Managed to find an old copy of LVS's sqlte_api-V4.1.0 now, which I suspect is what I used when evaluating SQLite previously. Stripped down to just the query I see that it is actually 3x slower than your solution @drjdpowell for this query(!). I also tried SAPHIR's Gdatabase with similar slow results. I also did a test in python as a reference, and it ticked in at 0.44 seconds. So no reason to expect better after all...
  21. Attached is an example VI and db-file where I just use a single sql execute to retrieve the data...On my Windows 10 computer (SSD) with LabVIEW 2018 a full retrieval, with or without a time clause, takes about 0.66 seconds this way. I have tried using the select many rows template, and it takes about the same time. Retrieval test.zip
  22. Whether I use the timestamp as the primary key or not does not affect much. A select * from the table will also spend about as much time as one with an equivalent where clause on the timestamp.... 99% of the time is spent in the loop running the get column dbl node (retrieving one cell at a time is normally something I would think would be inefficient in itself, but perhaps not, I guess that is dictated by the underlying API anyway, or?).
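The cost of fetching one cell per call, as opposed to pulling the whole result set in one pass, can be illustrated outside LabVIEW too. This is a rough sketch using Python's built-in sqlite3 module (the table name, row count, and schema are illustrative, not taken from the thread); it contrasts a single bulk query with one round trip through the binding per value:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v REAL)")
with conn:
    conn.executemany("INSERT INTO t VALUES (?, ?)",
                     ((i, float(i)) for i in range(50_000)))

# Bulk: one statement, one pass over the whole result set.
t0 = time.perf_counter()
bulk = [row[0] for row in conn.execute("SELECT v FROM t ORDER BY id")]
t_bulk = time.perf_counter() - t0

# Per-value: a separate query (and a wrapper round trip) per cell.
t0 = time.perf_counter()
cells = [conn.execute("SELECT v FROM t WHERE id = ?", (i,)).fetchone()[0]
         for i in range(50_000)]
t_cell = time.perf_counter() - t0

print(f"bulk {t_bulk:.3f}s  per-cell {t_cell:.3f}s")
```

The underlying C API is indeed cell-based (sqlite3_column_double and friends), so the per-cell cost usually lives in the per-call dispatch overhead of the wrapper, not in SQLite itself; batching the retrieval into as few calls as possible is what pays off.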
  23. I've now incorporated flarn's code into a tool that will set splitter size. See here: https://forums.ni.com/t5/Quick-Drop-Enthusiasts/Pane-Relief/gpm-p/4014004/highlight/true#M1130 Just search "PaneRelief" on VIPM to download!
  24. First of all, thank you again for your quick response. And sorry for the delay, but I was out and could not check it until today. I took some time to try using the same bitness in all DLLs and LabVIEW as you suggested, but it did not work; I had the same error in all the trials I did. Moreover, I was surprised because I repeated the same experiment described in my previous post with the two extension-functions DLLs (extension-functions.dll and extension-functions_64.dll, which I suppose are the 32-bit and 64-bit files, respectively) and I get no errors when trying to load either of these files. These are the statements I used: SELECT load_extension('D:\Program Files (x86)\National Instruments\LabVIEW 2019\vi.lib\drjdpowell\SQLite Library\SQL Connection\extension-functions.dll'); SELECT load_extension('D:\Program Files (x86)\National Instruments\LabVIEW 2019\vi.lib\drjdpowell\SQLite Library\SQL Connection\extension-functions_64.dll'); For these reasons I think there is another problem that is not allowing me to load the SpatiaLite DLL files. Thank you in advance for your help! Víctor
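A 32/64-bit mismatch between the process and the DLL is the usual cause of this kind of load failure, and a Windows DLL records its bitness in its PE header. As a sanity check, here is a small hypothetical helper (Python, not part of the SQLite library in question) that reads that field, so each DLL on disk can be compared against the LabVIEW bitness:

```python
import struct

def dll_bitness(path):
    """Read the PE header's Machine field to tell a 32-bit DLL from a 64-bit one."""
    with open(path, "rb") as f:
        assert f.read(2) == b"MZ", "not a DOS/PE file"
        f.seek(0x3C)                      # e_lfanew: file offset of the PE header
        (pe_offset,) = struct.unpack("<I", f.read(4))
        f.seek(pe_offset)
        assert f.read(4) == b"PE\x00\x00", "missing PE signature"
        (machine,) = struct.unpack("<H", f.read(2))
    # IMAGE_FILE_MACHINE_I386 / IMAGE_FILE_MACHINE_AMD64
    return {0x014C: 32, 0x8664: 64}.get(machine)
```

If the bitness reported for a DLL (including every DLL it depends on) does not match the LabVIEW process, load_extension will fail no matter how correct the path is; SpatiaLite in particular pulls in several dependent DLLs that all have to match.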
  25. First question is what is your table's Primary Key? I would assume it would be your Timestamp, but if not, the lookup of a small time range will require a full table scan, rather than a much quicker search. Have you put a probe on your prepared statement? The included custom probe runs "explain query plan" on the statement and displays the analysis. What does this probe show?
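For anyone following along outside LabVIEW, the same "explain query plan" check works through any SQLite binding. A sketch in Python's sqlite3 module (the schema is my guess at the poster's table: a timestamp plus three measurement channels):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Guessed schema: timestamp primary key plus three measurement channels.
conn.execute("CREATE TABLE log (ts REAL PRIMARY KEY, ch1 REAL, ch2 REAL, ch3 REAL)")

# Prefix the query with EXPLAIN QUERY PLAN to see how SQLite will run it.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT ch1, ch2, ch3 FROM log WHERE ts BETWEEN ? AND ?",
    (0.0, 60.0),
).fetchall()
for row in plan:
    print(row[-1])  # a SEARCH using an index, rather than a full table SCAN
```

The thing to look for in the output is "SEARCH ... USING INDEX" versus "SCAN": a SCAN on a 350 000-row table means every row is visited for every time-range request, which by itself can explain sub-second-but-slow retrievals.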
  26. I was thinking of replacing a proprietary chunk-based time series log file format that we use with SQLite, using this library. From a test I did some time ago I thought read performance would not be an issue (unlike for TDMS for example, which performs really badly due to the fragmentation you get when logging large non-continuous time series), but now that I have redone the tests I see that the read times get quite high (writes are less of an issue in this case, as we normally just write smaller batches of data once a minute). I expected the database to be much slower, but the difference is of a magnitude that could be troublesome (no longer a real-time response). To give a simple example: if I have a table of 350 000 rows containing a time stamp (real) and 3 columns of measurements (real), and I select a time (sub)period from this table, it takes up to 0.6 seconds to retrieve the measurements on my test machine (using either a single SQL Execute 2D dbl, or the Select many rows template). For comparison, the current binary file solution can locate and read the same sub-periods with a worst case of 0.03 seconds, 20x faster. This difference would not be an issue if the accumulated time per batch of requests was still sub-second, but the end users might combine several requests, which means the total waiting time goes from less than a second ("instantaneous") to 5-30 seconds... Perhaps I was using @ShaunR 's library back when I was able to get this done "instantaneously", but that seems to no longer exist (?), and in this case I need a solution that will run on Linux RT too... Should it be possible to read something like this any faster, and/or does anyone have any suggestions on how to improve the performance? The data will have to be on disk; the speedup cannot be based on it being in cache / memory...
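As a rough point of reference for the numbers in this post, here is a small Python benchmark of the same shape of query: 350 000 rows of timestamp plus three channels, with the timestamp as the primary key so a time-range select can use the index instead of scanning the table. The schema and values are illustrative, not the poster's actual data:

```python
import random
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (ts REAL PRIMARY KEY, ch1 REAL, ch2 REAL, ch3 REAL)")
with conn:
    conn.executemany(
        "INSERT INTO log VALUES (?, ?, ?, ?)",
        ((float(i), random.random(), random.random(), random.random())
         for i in range(350_000)),
    )

# Select a sub-period: with ts indexed, only the matching rows are touched.
t0 = time.perf_counter()
rows = conn.execute(
    "SELECT ts, ch1, ch2, ch3 FROM log WHERE ts BETWEEN ? AND ?",
    (10_000.0, 20_000.0),
).fetchall()
elapsed = time.perf_counter() - t0
print(len(rows), f"{elapsed:.4f}s")
```

This uses an in-memory database, so it only bounds the binding and query overhead rather than disk I/O; still, if an equivalent on-disk query is dramatically slower, the query plan (index search versus full scan) and the number of per-cell calls made by the retrieval loop are the first two things worth checking.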