Posts posted by ShaunR

  1. I don't have a definitive answer, but here are some things you could look at.

    Under the App properties there is a "Source Control" sub-menu that looks interesting. It appears to be a hook into the LabVIEW source control features (the read/write config entries look interesting). Maybe you could find a way to make LabVIEW Mercurial-aware!

    A bit of a kludge, but you could monitor the revision number and act when it changes. As far as I'm aware, it is only incremented after a save rather than a compile.

  2. I too experienced the "You must enter a post" problem over the weekend (never before).

    I also experienced "Error 403 Access Denied" for quite a while (it happened a couple of times) when trying to view any pages. Not sure if they are related. I just assumed someone was updating the site (I nearly thought I had been banned).

  3. On OSX, LV expects to interface with a ".framework" bundle. It has to be 32-bit, as LV on OSX is still 32-bit.

    The easiest way to produce a framework is to use Xcode and select New Project / Carbon or Cocoa Framework.

    Xcode will prepare a skeleton ready to be used.

    As SQLite is built into OSX, you should use the provided implementation and just write a wrapper around it.

    It is located in /usr/lib/sqlite3. The wrapper is only a few lines of code; I've done similar wrappers for other libs.

    Let me know if you'd like a copy of the Xcode project and files.

    Note that LV does not unload a DLL/framework from memory; you have to quit LV. If anyone has found an alternative, I'm very much interested!

    Chris

    Many thanks Chris.

    I too thought about using the pre-installed SQLite. But I'm not sure I can guarantee that it will be 32-bit if the Mac starts in 64-bit mode. A wrapper is not a useful way forward, as it requires the upkeep of code other than LabVIEW and would mean a lot of work re-implementing an interface that is already there. A Windows wrapper went this route and is limited because of it.

    From my limited reading of the "framework" implementation, it's a bit like a fat (multi-architecture) binary. So does that mean I can compile both a 32-bit and a 64-bit binary and include both in the framework? Does the pre-installed SQLite have both 32-bit and 64-bit and switch transparently? Do I really have to type in each and every function rather than select it from the LV combo box? (And does that mean I select the ".framework" instead of the dylib?) There seems to be little information on the net for Mac developers.

    I will have a play with Xcode. I know it's installed, but I'm not sure where to find it (it doesn't show up under Applications). I've only just learnt to use Code::Blocks, so I naively thought it would just be a case of re-compiling for the Mac (it's also available for the Mac).

  4. I've had a quick perusal of the Mac framework link you provided (many thanks).

    Sheesh! What a pain. Initially it looked like the best way forward would be to link to the SQLite framework shipped with the Mac, but as LV for the Mac is 32-bit, you cannot guarantee that the SQLite will be 32-bit.

    It looks like Mac users are going to have to wait for me to complete the learning curve if there are no LV Mac gurus around to offer guidance (no response so far to my question in the Mac section). Or maybe it's a sign that it isn't that important (and the API is not that useful to the few Mac users there are) and I should divert my attention to other things.

  5. I think guaranteeing no corruption is rather subtle. On SQLite's "How To Corrupt Your Database" page, the first possible problem beyond the uncontrollable is the OS/filesystem not fsyncing properly. The next is ext3 (a journaled file system) without the barrier=1 option (which, to my understanding, breaks sync with write caching). I think the problem (there could be more) is that writes to the hard drive can be reordered (which I think can happen when write caching and NCQ don't properly fsync; also, an active SQLite database is normally two files). You might be able to work without a write cache on a journaled file system, or by modifying SQLite3 to use a transactional interface to the underlying filesystem (I think ReiserFS supported something like that). Anyway, I would suggest keeping sync at full, or at least normal, since it's easy for someone to change the safety with a pragma anyway. The SQLite docs recommend normal for the Mac, since on OSX it works around lying IDE controllers by resetting, which really slows things down.

    fsync is only used on Unix-like OSs (read: Linux, Mac). Under Windows, "FlushFileBuffers" is used. It also states at the end of the paragraph that:

    "These are hardware and/or operating system bugs that SQLite is unable to defend against."

    And again, in the "Things That Can Go Wrong" section (9.4), it states:

    "Corrupt data might also be introduced into an SQLite database by bugs in the operating system or disk controller; especially bugs triggered by a power failure. There is nothing SQLite can do to defend against these kinds of problems."

    Where locking is "broken" (multiple simultaneous writes causing corruption), it seems to be referring to network file systems. In this scenario the website states:

    "You are advised to avoid using SQLite on a network filesystem in the first place."

    The main issue seems to be centred around old consumer-grade IDE drives. I remember reports about something like this a long time ago. I haven't, however, read any articles about SATA drives (much more prevalent nowadays) having similar problems. But synchronous mode seems to be an attempt to "wait" a few disk revolutions in the hope that data still in the drive's internal write cache is finally written to the platters (still not a guarantee). And I think it's probably not relevant with many modern hard disks and OSs (Windows at least). Additionally, putting SQLite (as a single subsystem) through our risk assessment procedure reveals a very low risk.

    My view is that if the data really is that important, then classical techniques should also be employed (frequent back-ups, UPS, redundancy, etc.).

    The problem with committing on every query is that you can't compose multiple queries, which is something I do since I want the database to always be consistent, and a lot of my operations involve reading something from the database and writing something that depends on what I just read.

    You can. You just compose them as a string and use the "Transaction Query"; that is its purpose. Although in the "Speed" example it's only used for inserts, it can also be used for selects, updates, deletes, etc.

    The API is split into 2 levels:

    1. The top level (polymorphic) VIs, which are designed as "fire-and-forget", easy DB manipulation that can be placed anywhere in an application as a single module.

    2. The low level VIs, which expose much of the commonly used functionality of SQLite to enable people to "roll their own". You can (for example) just open a DB and execute multiple inserts and queries before closing, in exactly the same way as yours and other implementations do (this is what "query by ref" is for; it is synonymous with SQLite's "exec" function).

    Instead of reopening the file by default, you could wrap all queries in a uniquely named savepoint that you roll back to at the end of the query. Then you have the same level of safety with composability, and you gain performance from not having to reopen the database file. The only trick is to have separate "begin transaction" and "rollback" VIs (since those can't be wrapped in savepoints), which I'd recommend as utility functions anyway (they are among the main VIs in my library).

    In the benchmarks I ran initially, there is little impact from opening and closing on each query (1-15 us). The predominant time is the collation of query results (for selects) and the commit of inserts. But it gives the modularity and encapsulation I like (I don't like the open-at-the-beginning-and-close-at-the-end methodology). If that "floats your boat", though, you can still use the low-level VIs instead.

    I did look at savepoints. But for the usage cases I foresee in most implementations, there is no difference between them and Begin/End transactions. OK, you can nest Begin/End, but why would you? It's on the road-map, but I haven't decided when it will be implemented. If you can think of a "common" usage case then I will move it higher up the list.

    I think I figured it out: my work computer is WinXP and has the write cache enabled, which breaks fsync to some extent (I think). Now I see a difference on my Win7 home computer, which supports write-cache buffer flushing (a proper fsync). I may need to get an additional hard drive at work just to run my database on if I want to guarantee (as best as possible) no database corruption.

    See my comments above.

    I think the difference between a blank DB and a :memory: DB is that the blank one is a database with sync=0 and journal_mode=memory (the pragmas don't indicate this, though), while with :memory: all the data in the database is stored in memory (as opposed to just the journal).

    Indeed. And I would guess "TEMP" tables are also in memory. I don't think there's much in it.

  6. Hi,

    I'm gearing up to start development of our next two projects in LV2010 and noticed that I'm unable to create properties and methods on XControls (either new ones or existing ones from earlier LV versions). I checked this on the LV2010f2 32-bit version as well as the LV2010 64-bit version. See what happens in the video linked below.

    http://screencast.com/t/IOBg5ir4y

    The same happens for method creation: LV creates a 'normal' VI instead of a method VI.

    I was unable to find any info or reports on this here or on the dark side. Maybe I'm missing something, but if this is a bug, it's quite a showstopper for me. We'll have to stay on 2009 then.

    Anyone else noticed this weird behaviour?

    Just save all. You will notice that it asks you to save the read/write VIs (they have been created). Then the VIs will become visible.

  7. I'm with Ton on this one.

    Your example screenshot is actually starting at an "amplitude" of 60, not at 60°. 90° would start at the peak of the waveform (cosine), so to start at 60° it would be much further up.

    For the other question: just use the "change sign" function on the data and start at 180-degrees or -degrees, depending on what you're trying to achieve.

  8. I prefer being certain the data can't get corrupted (the main reason I'm using SQLite). I'm not convinced that having a journaled file system permits me to avoid syncs.

    It's more to do with data loss than corruption. Don't forget, it's not turning off journalling in SQLite; it's just returning as soon as the OS has the info (which is therefore present in the OS's journal). The worst that can happen (I believe) is that during a crash, changes to SQLite's journal aren't transferred to the OS's journal, so some piece of data might not be written to disk when restarted. On restart, the OS will ensure that incomplete (file system) transactions are resolved, and when SQLite is started it will ensure that incomplete SQL transactions are resolved. Additionally, I open and close the file on every query, which automagically causes a commit on every query, which (in my mind) is safer. But I have made it an option, so it's up to the user to decide.

    With this benchmark, turning off sync makes very little difference (maybe 5 ms) on my system (are you using something exotic for a hard drive or filesystem?). If I test a large number of transactions, the difference is enormous.

    Nope. I'm using NTFS (write caching enabled). But something is different, since (as you can see from the images) the insert time of get_table is more in tune with the inserts of my implementation when synch is FULL (~200 ms). The only way I can get the same results as your benchmark is to use in-memory temp tables; then I'm at the same insert times. What are the compilation options for your DLL?

    Just to note: it turns out that if you use a blank path, the database performs as if you had sync off (since it's temporary, it doesn't need durability).

    So my insert time is really 74 ms, not 68 ms (the path now has a default value in my test code, since I forgot to set it last time).

    Yes, this I'm not sure about, since I can also find little difference between an in-memory DB and a "temporary" DB. It doesn't state it, but what could be happening is that the journal and temporary tables are created in memory when the db name is blank, giving rise to performance similar to an in-memory DB.

  9. Here's info on frameworks

    If I don't handle strings containing null, mine takes 60 ms to dump, and if I don't try to concatenate results from multiple statements I can get down to 50 ms (I think I'll remove this feature; the only real use I can think of is checking pragma settings). So I'm just as fast at the same functionality (at least in LV2010). With handling strings containing null but not concatenating, I'm at 57 ms. Apparently SQLiteVIEW is handling selects slightly better than me, since they're still faster. At least I'm winning the insert benchmark by a good margin.

    That's cheating. That's like me getting 60 ms on LV2009.

    I want it all: the extra functionality and the speed (eventually).

    I haven't checked, but the GetValueByPointer.xnode may call the UI thread; if it does, it won't parallelise too well.

    If you're talking about parallelising in terms of for loops across multiple processors, then there's not much in it. A good choice of non-subroutine execution systems and subroutines yields better results.

    I'm not too happy about using the xnode (I'm not keen on XNode technology in its current form anyway). I will probably knock up a more raw version using MoveBlock, since I don't need the polymorphism and who knows what other stuff is locked away inside.

    I'm surprised that the dump is that much faster in 2009; my best guess is that GetValueByPointer is different somehow (maybe the old one doesn't call the UI thread).

    How are you turning off synchronization, and which SQLite library gets down to an insert of 45?

    I'm just using the Pragma command to switch synch. I've made it so that a simple change in a project conditional statement lets you switch between them all.

    I don't think synchronisation is necessary on already-journaled file systems (e.g. NTFS and ext3); I think it's more appropriate for FAT32 and other less robust file systems. So the shipped setting in the next release will be OFF.

    Here are some results from my latest incarnation showing the effect of turning off synch. I've switched to testing by averaging over 100 iterations, since there is a bit of jitter due to file access at these sorts of times. You'll probably notice the difference between the average insert time and the insert time from the last iteration; with synch OFF they are much more in agreement.

    Synch=FULL. LV 2009 x64 on Win 7 X64

    Synch=OFF. LV 2009 x64 on Win 7 X64

    Your "get_table", SYNCH=FULL LV2009 x64 on Win 7 x64

  10. I tend to update only at the beginning of new projects AND only when a service pack has been released. But I don't see an update as being mutually exclusive with using previous versions. Therefore I may have the latest version installed, just not use it for production code. Quite often I will take a completed project and mass-compile it offline for the new version to see what issues pop up (there are always some).

    I take it you're not on an SSP? If not, then I would suggest pushing through for the budget and acquiring a 2010 licence. By the time it comes through, a service pack should be available (we're about due, I think).

    Sorry, I can't really answer your question fully since I've only "played" with 2010. I would suggest you obtain permission to install the evaluation of LV2010 on a machine and try compiling some known working projects, so you can run them for a week or so and see what happens. That will shake the bushes enough for you to make up your own mind, since we all use little tricks and workarounds that may or may not work between versions.

  11. I haven't tried porting mine to the Mac yet, but I figured out how to get the calls to work. If I remember all this right (I don't have access to a Mac right now and it's been a couple of weeks), you need to create a "sqlite3.framework" folder and copy a sqlite3.dylib file into it (you can get one from /usr/lib or extract it from Firefox). You set the CLN in LabVIEW to point to the framework directory, and you have to type the function names in (the function name ring didn't work for me).

    I hacked together (barely tested, on LV2010f2 WinXP, and not documented) a get_table implementation to see if there was any speed improvement. There wasn't much, since get_table is just a wrapper around all the other functions.

    GetTable

    Insert 141

    Dump 49

    Sweet. Nice work on that.

    I think blobs may be an issue with this, as they are with straight text. But still, it's definitely worth exploring further. You've shown me the way forward, so I will definitely be looking at this later.

    I'm not sure why you think it's not much of an improvement. For select queries it's an increase of ~60% on mine and ~40% on yours (using your benchmark results). I wouldn't expect much on inserts, since no data is returned and therefore you don't have to iterate over the result rows.

    Incidentally, these VIs run faster on LV2009 (for some reason). Your "get_table" example on LV2009 x64/x32 inserts at ~220 ms and dumps at ~32 ms (averaged over 100 executions); on LV2010 I get roughly the same as you. Similarly, my 1.1 version inserts at 220 ms and dumps at 77 ms (averaged over 100 executions); again, I get similar results to you in 2010. Of course, dramatic insert improvement can be obtained by turning off synchronization; then you are down to an insert speed of ~45 ms.

    My next goal is to get it working on the Mac. I have another release lined up and am just holding off so that I can include the Mac as well. So I will have a go with your suggestion, but it looks like a lot of work if you cannot simply select the lib. Do you know of a web page (for newbies) that details the "framework" directory layout (e.g. why it needs to be like this, what its purpose is, etc.)?

  12. Hi all.

    Not really a "LabVIEW" question... but related.

    I have "zero" experience with Macs, so apart from a huge learning curve, I'm getting bogged down with multiple tool chains and a severe lack of understanding of the Mac (which I think is BSD-based).

    I've released an API which currently supports Windows (x32 and x64) and has been reported to work on Linux x32. These work because I'm able to compile a DLL and .so for those targets. I'd really like to add the Mac to the list of supported platforms, but I'm having difficulty compiling a shared library (dylib?) that LabVIEW will accept.

    I have set up a Mac OSX Leopard 10.5 virtual machine with LabVIEW and Code::Blocks. It all runs (very slowly) and I'm able to compile a shared library in both x32 and x64 (well, I think I can at least; gcc is using -m32 for the 32-bit build and won't compile if I have the wrong targets). I know the trial version of LabVIEW I downloaded is 32-bit (got that from a conditional disable structure), and I think the Mac is 64-bit.

    So I have the tool-chains set up and can produce outputs which I name with a ".dylib" extension. However, no matter what compiler options I try, whenever I try to load a successful build using the LabVIEW library function (i.e. select it in the dialogue), it always says "Not a valid library".

    Does anyone know what the build requirements (compiler options, architecture, switches, etc.) are for a Mac shared library? There is a plethora of them and I'm really in the dark as to what is required (i386? Intel? AMD? All of the above? -m32? BUILD_DLL? -shared?)

    Any help would be appreciated.
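    When debugging a toolchain like this, it can help to reduce the shared library to a single exported function. One plausible (unverified for this exact setup) 32-bit OSX build line uses Apple's -dynamiclib flag rather than the Linux-style -shared; the file and function names below are made up for the sketch:

```c
/* demo.c - minimal shared-library source for sanity-checking a toolchain.
   A plausible (unverified) 32-bit OSX build line:
     gcc -m32 -dynamiclib -o libdemo.dylib demo.c
   If this loads into LabVIEW, the toolchain is sound and any remaining
   problem lies in the real library's build options. */
int add_ints(int a, int b)
{
    return a + b;
}
```

    Starting from something this small separates "wrong compiler flags" failures from failures caused by the real library's own dependencies.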

  13. I suggest you create a thread in the mac section to discuss this issue.

    On a side note, I think there are some mac maniacs on info-LabVIEW, dunno if you use it...

    I didn't even know LAVA had a Mac section.

    Thanks for that. I'll give it a whirl.

    Never mind; I was doing it wrong. Thanks for the snippet.

    I don't see that polymorphic Insert VI in the palettes. What am I doing wrong?

    [screenshot attachment]

    You're not doing anything wrong; you'll just have to wait for the next release. All that's happened is I've wrapped the original in a polymorphic VI, since there is another insert which should make it easier to insert tables of data without the "clunky" addition of fields (which is fine for single-row iterative inserts).

  14. The difference between a patch and a stability and performance release is largely in the integration between features and how deep into the corners we sweep. A patch fixes very targeted bugs, bugs that crash, bugs that have been specifically noted by a large number of customers, or bugs affecting high profile features which have no workarounds available. This release is going after a lot of non-critical issues.

    A fair comment, although I do subscribe to the premise that the corners are swept on every release and that a product up-issue is an extension of a rugged base. But then again, I'm more involved with mission-critical software, where even "minor" annoyances are unacceptable.

    Let's hope the "Tabs Panel" resizing is finally fixed.

  15. I finished up the last (hopefully) pass through my library a week or two ago. I got permission from my employer to post my SQLite library (I need to decide on a licence and mention the funding used to create it), and when I went to figure out how to make an OpenG library (which I still haven't done yet) I saw the SQLiteVIEW library in VIPM, which is similar to mine. But mine has a few advantages:

    Gives meaningful errors

    Handles strings with \00 in them

    Can store any type of LabVIEW value (LVOOP classes, variants with attributes)

    Can handle multi-statement queries

    Has a caching system to help manage prepared queries

    As for benchmarks, I modified your speed example to use my code (using the string-based API I added to mine since the last comparison) and the SQLiteVIEW demo from VIPM. This is on LV2010f2 WinXP.

    Yours

    Insert 255 Dump 107

    This is the no-longer-available library that I'd been using up until my new one (I'm surprised how fast it is; I think it's from having a wrapper DLL handle the memory allocations instead of LabVIEW):

    Insert 158 Dump 45

    SQLiteVIEW

    Insert 153 Dump 43

    Mine

    Insert 67 Dump 73

    Splendid.

    The wrapper (if it's the one I'm thinking of) uses sqlite3_get_table. I wanted to use this to save all the fetches in LabVIEW, but it requires a char *** and I don't know how to reference that. The advantage of the wrapper is that you can easily use the "helper" functions (e.g. exec, get_table), which would vastly improve performance. But I'm happy with those results for now; I'll review the performance in detail in a later release.

    Did you manage to compile SQLite for the Mac?

    I managed to compile SQLite on a virtual machine, but whatever I tried, LabVIEW always said it was an invalid library. Even the Mac library I downloaded from the SQLite site wouldn't load into LabVIEW. I think it probably has something to do with the "bitness".

    Any Mac gurus out there?

  16. other SQLite tools.

    Such as? LV tools for SQLite are few and far between, hence the reason for publishing this API.

    If you look back, Matt W has done some benchmarking against his implementation. There is a "Speed" demo, which means anyone can try it for themselves against their tools (post them here, folks).

    There are a few tweaks in the next release, but the improvements are nowhere near the same as between versions 1.0 and 1.1. Now it's more to do with what hard disk you are using, what queries you are performing, and whether you can tolerate less robust settings of the SQLite DLL.
