
Matt W

Members
  • Posts: 63
  • Days Won: 7
Everything posted by Matt W

  1. Your code has a memory leak: the reference created by OwningVI needs to be closed. I have a similar bit of code, and it took me half a day to track down that memory leak. You can use the Desktop Execution Trace Toolkit (which didn't exist back then) to find hanging references.
  2. I use sqlite3_column_text (it only differs in how it handles zero-length blobs); the trick is to use MoveBlock. I use variants since they hold type information, and it's NI's job to make sure they support all types (although there are a few quirks I've had to work around). I don't need to escape strings since I use bound variables, and I can also pass \00 in string parameters. Bound variables are also the proper way to avoid SQL injection attacks (a minimal C sketch of the bound-parameter approach appears after this post list). Fortunately I qualify as a real "geek": not that I like C, but I know it well enough.
     When I got home I got it cross-compiling with LV64; it just involved putting pointer-sized ints in the right places. I'll try to get it to work on a Mac at work on Monday (if I set it up right I shouldn't need to change anything). I also removed some restrictions from my interface, and removed the need for strlen when preparing strings (that required the use of DSNewPtr, MoveBlock and DSDisposePtr). Beyond SQLite the only calls I use are DSNewPtr, MoveBlock and DSDisposePtr, all of which are supplied by LabVIEW (so I would hope they are on RTWin).
     Anyway, with LV2010 64-bit on Win7, with strings and doubles (the same as your initial speed test) but modified to include formatting time:
     Yours: Insert 401, Dump 72
     My current version: Insert 277, Dump 151
     I don't know why your insert got slower and your dump faster compared with the 32-bit Win XP LV2010f2 machine I tested on before (the 64-bit computer should be faster CPU- and IO-wise).
     I'm familiar with re-entrant VIs. From the testing I did, mine seems to handle concurrent access properly. I'm using SQLITE_OPEN_FULLMUTEX, which should reduce crashing from misusing the API. If you have an example that causes "database locked" errors, I'll check whether I'm handling it properly.
     You can use an in-memory database, which should be faster than an SSD. With the above benchmark and an in-memory database I get:
     Insert 219, Dump 153
     So not a huge gain. But if I make every insert a transaction (and knock the test size down to 1000):
     Hard-disk file: Insert 71795, Dump 15
     In-memory: Insert 24, Dump 16
     So an SSD could make a large difference with a lot of transactions.
  3. I compared your newer version with your test harness.
     If I don't modify the test for fairness:
     Yours: Insert 251, Dump 112
     Mine: Insert 401, Dump 266
     If I include the string construction in your benchmark and the variant construction in mine:
     Yours: Insert 299, Dump 114
     Mine: Insert 504, Dump 257
     If I change my select to use while-loop autoindexing like yours (smacks forehead), fix some execution settings I messed up, and turn on inlining (since I'm on LV2010 now), I get:
     Insert 352, Dump 181
     Considering that mine is handling type information and using variants, I doubt I could get mine much closer to yours speed-wise. With conversion from strings and to other formats I'd probably do better in some cases (since I can handle blob data directly, for instance, and don't need to escape strings that contain ').
     Some comments on the stuff I noticed poking around your code. In Fetch All you can put a step before the while loop, then put another step after the fetch; then you won't have to shrink the rows by one. Also, I'm not sure if this is true with subroutines, but "rows out" in "fetch record" being inside a structure requires LabVIEW to cache that indicator's value between runs. If you make the change to Fetch All this won't matter.
     Your multi-statement queries can't include strings containing ';'. I worked around that by calling strlen from the standard C library on the pzTail returned from prepare statement (while being careful to make sure the string was required in later LabVIEW code), then subtracting that length from the original length and taking a string subset to be the remaining statements. The proper solution would be to use a helper DLL and do the pointer math, to avoid traversing the string in strlen (a C sketch of that pzTail loop appears after this post list). But since I use prepared statements it doesn't affect my version much. And to be nitpicky, SQLite only has one L.
     As for my environment, my version currently only works on Win32-based systems; there's no reason I couldn't port it to other things, though. I have a 64-bit LabVIEW install at home. At work there's an old PXI 1002 box, but I've never used it (I haven't had a good excuse for playing with LabVIEW Real-Time).
  4. Select is slow because you're building the array one element at a time; you should grow it exponentially and resize it back down. I wrote my own SQLite wrapper a while ago, and I've been debating posting it after I finish up the documentation. Its interface is more difficult than yours, since it assumes knowledge of SQLite, but it is built for speed and for storing any kind of LabVIEW value. Anyway, for comparison, using the 10,000-insert code you posted, on my computer with LV2010:
     Yours: Insert 1202 ms, Select 6657 ms
     Mine: Insert 451 ms, Select 251 ms
     My select uses the same basic logic as yours (at least for values stored as strings). The main difference is that I prealloc 16 rows (note: it doesn't have to be 16, that just seemed like a good starting number to me), use Replace Array Subset to fill the values in, and if I need more space I double the array's size (I just concatenate it to itself); at the end I reshape the array back down (a C sketch of this growth pattern appears after this post list). Changing the allocation should make yours faster than mine (since you aren't dealing with type information). My insert is faster because I'm using prepared statements and value binding. I'm also only opening the file once, which probably accounts for some of the difference. Matt W
  5. I believe the defaults can be set in the XControl's Init ability. Matt W
  6. The generated encompassing circle can be too big when three (or more) of the sub-circles touch the edge of the true minimum circle; the one I posted will work in all cases where only one or two sub-circles touch the true minimum circle (a small C sketch of the two-circle case appears after this post list). So while it will always generate an encompassing circle, in many cases it may not generate the minimum encompassing circle. Matt W QUOTE (george seifert @ Mar 11 2008, 07:46 AM)
  7. QUOTE(george seifert @ Mar 10 2008, 04:42 AM) [edit]Never mind, this doesn't work.[/edit] I'm really surprised no one else has figured this out. See attached for a VI that should solve your original minimum radius problem. Matt W
  8. QUOTE(Graeme @ Jan 7 2008, 01:11 PM) You can also remove the flashing by deferring panel updates during the for loop. http://lavag.org/old_files/monthly_01_2008/post-7834-1199740584.png Matt W
  9. QUOTE(jlokanis @ Dec 17 2007, 03:17 PM) Thanks for posting the cause; you've saved me from a big headache, since I made the same mistake. Matt W
  10. QUOTE(Lars-Göran @ Nov 14 2007, 01:06 PM) I hadn't thought of continuously generating internal triggers. I'm not sure if this would work, but you could ignore the internal queue entirely and send the internal triggers to the external queue. Personally I like the concept of iterations, since it seems to make timing data acquisition used by the statechart easier. I meant that each trigger would have its own cluster (or none), and states that trigger off of it would have access to that cluster, much like NewVal in the event structure for a value-change event, which only turns into a variant if there are multiple controls with incompatible types. Matt W
  11. QUOTE(Lars-Göran @ Nov 13 2007, 12:29 PM) You lose the use of the inputs and outputs of the statechart (which I've found useful). And there's an easier solution: just use a synchronous statechart with your own queue of data inputs and triggers. I posted a very trivial example on NI's forum: http://forums.ni.com/ni/board/message?boar...ssage.id=280719 This still requires a while loop, but if you embed it in a subVI you effectively have the same thing. I found the concept of iterations to be helpful once I started using the inputs and outputs of the statechart. This would be easier if triggers had a built-in (and typed) way to pass parameters with the trigger, which is a feature that needs to be added in my opinion (and can be added onto the current statechart models). Currently I can either use a variant (losing static type checking) or a large cluster (wasting memory). Matt W
  12. QUOTE(LV Punk @ Sep 7 2007, 05:04 AM) I'm not sure my customer (Uncle Sam) would commit to using this; but it looks good. Most TOE stuff requires a high-end server config (that I don't have). The dual-core (or quad, what the heck!) has been in my mind; I might be able to get my supplier to let me eval and check the performance difference.
     My main concern with it would be reliability (I heard the early drivers were flaky; I'm not sure if that's still the case, but the only way to be sure is to test it). Also, the standard (32-bit, 33 MHz) PCI bus only has 133 MB/s of bandwidth (gigabit Ethernet has a theoretical peak of 125 MB/s), so if you have anything other than a gigabit NIC on that bus you could run out of bandwidth. In older motherboards the hard drive controller would typically hang off the PCI bus, which would be limiting if that's the case on the current motherboard. Even a slow 1.8 GHz Core 2 should be faster than a 3.0 GHz P4 in the majority of cases (including running with only one core), let alone a 2.4 GHz quad core (the CPU only costs ~$280). But I doubt your motherboard supports it (even though it may have the right socket). You can get an idea of the speed difference here: http://www23.tomshardware.com/cpu.html
  13. QUOTE(LV Punk @ Sep 6 2007, 05:26 AM) In my mind the best option would be to upgrade the CPU to a Core 2 based one (if possible), but that doesn't seem to be an option now. I've never used one, and have no idea how well it would work (it's built for low-latency gaming, so I don't know what it'll do in your use case), but a Killer NIC (http://www.killernic.com/) goes beyond TOE. It basically runs a Linux OS on the network card, which can offload a lot of the network processing (it could offload the firewall, for instance, in addition to the TCP stack). Also, it's PCI and XP compatible.
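
Referenced from post 2: a minimal C sketch, against the standard SQLite3 C API, of what binding values to a prepared statement looks like. The table and column names are made up for illustration and error handling is abbreviated; the point is that bound parameters carry an explicit length, so quote characters never need escaping, embedded \00 bytes survive, and SQL injection is not possible through the values.

```c
#include <sqlite3.h>
#include <string.h>

/* Insert one row into a hypothetical table "data(name TEXT, payload BLOB)". */
int insert_row(sqlite3 *db, const char *name, const void *blob, int blob_len)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db,
        "INSERT INTO data (name, payload) VALUES (?1, ?2);", -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;

    /* Explicit lengths: embedded NUL bytes and quotes in the data are fine. */
    sqlite3_bind_text(stmt, 1, name, (int)strlen(name), SQLITE_TRANSIENT);
    sqlite3_bind_blob(stmt, 2, blob, blob_len, SQLITE_TRANSIENT);

    rc = sqlite3_step(stmt);              /* SQLITE_DONE on success */
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}
```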
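Referenced from post 3: a rough C sketch of how the pzTail output of sqlite3_prepare_v2 lets a wrapper walk a multi-statement string without ever scanning for ';' itself; this is the pointer math that the post says a helper DLL would do. The exec_multi helper is hypothetical and error handling is minimal; result rows are simply discarded.

```c
#include <sqlite3.h>

/* Execute every statement in a semicolon-separated SQL string. */
int exec_multi(sqlite3 *db, const char *sql)
{
    while (sql && *sql) {
        sqlite3_stmt *stmt = NULL;
        const char *tail = NULL;

        int rc = sqlite3_prepare_v2(db, sql, -1, &stmt, &tail);
        if (rc != SQLITE_OK)
            return rc;

        if (stmt) {                        /* NULL for comments / whitespace */
            while ((rc = sqlite3_step(stmt)) == SQLITE_ROW)
                ;                          /* results discarded in this sketch */
            sqlite3_finalize(stmt);
            if (rc != SQLITE_DONE)
                return rc;
        }
        sql = tail;                        /* pzTail points past the statement just compiled */
    }
    return SQLITE_OK;
}
```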
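Referenced from post 4: the array-growth idea expressed in C terms: start with a small preallocation, double the buffer whenever it fills, and trim it back to the actual row count at the end, so building the result is amortized rather than reallocating once per row. This sketch collects only the first column as text and skips allocation-failure checks to stay short; it illustrates the pattern, not the wrapper's actual code.

```c
#include <sqlite3.h>
#include <stdlib.h>
#include <string.h>

/* Collect column 0 of every result row as a C string. */
char **fetch_first_column(sqlite3_stmt *stmt, size_t *out_rows)
{
    size_t cap = 16, n = 0;                    /* 16 is just a starting guess */
    char **rows = malloc(cap * sizeof *rows);

    while (sqlite3_step(stmt) == SQLITE_ROW) {
        if (n == cap) {                        /* buffer full: double it */
            cap *= 2;
            rows = realloc(rows, cap * sizeof *rows);
        }
        const unsigned char *txt = sqlite3_column_text(stmt, 0);
        rows[n++] = strdup(txt ? (const char *)txt : "");
    }
    if (n)                                     /* trim back down to the final size */
        rows = realloc(rows, n * sizeof *rows);
    *out_rows = n;
    return rows;
}
```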
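Referenced from posts 6 and 7: the two-circle case in C, since that is the case the posted VI handles exactly. The smallest circle enclosing two circles has its centre on the line joining their centres and radius (d + r1 + r2) / 2, unless one circle already contains the other; checking whether every other circle also fits inside that candidate tells you whether it is the true minimum. The types, tolerance, and function names here are my own, for illustration only.

```c
#include <math.h>

typedef struct { double x, y, r; } Circle;

/* Smallest circle enclosing circles a and b. */
Circle enclose_two(Circle a, Circle b)
{
    double dx = b.x - a.x, dy = b.y - a.y;
    double d  = sqrt(dx * dx + dy * dy);

    if (d + b.r <= a.r) return a;             /* a already contains b */
    if (d + a.r <= b.r) return b;             /* b already contains a */

    Circle c;
    c.r = (d + a.r + b.r) / 2.0;
    double t = (c.r - a.r) / d;               /* fraction of the way from a's centre to b's */
    c.x = a.x + t * dx;
    c.y = a.y + t * dy;
    return c;
}

/* True if circle inner lies entirely within circle outer. */
int contains(Circle outer, Circle inner)
{
    double dx = inner.x - outer.x, dy = inner.y - outer.y;
    return sqrt(dx * dx + dy * dy) + inner.r <= outer.r + 1e-12;
}
```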