Everything posted by ShaunR
-
www.ni.info is available
-
extracting information about specific array element
ShaunR replied to liakofan's topic in LabVIEW General
You're right. In my quick scan of the examples I assumed that identifying which ones had changed was also a requirement. If it's only the number, then it should count them OK.
-
You can use the "Order BY" in the where clause. e.g Note that you have to have a where statement of some description so you can use RowID>0 to return all rows in a table if you have no "Where" constraint.
-
It beats the cr@p out of LabVIEW for web services. But my response was to the question "what other languages do you use".
-
extracting information about specific array element
ShaunR replied to liakofan's topic in LabVIEW General
Because his example only updates and checks when there is a change in the data. Your version (without the event case) continuously evaluates the difference between the current and previous values, so you miss the change. By the time you press the stop button, the previous and current values are the same (although different from the starting values), therefore you don't detect a difference. A sketch of the effect is below.
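A minimal C sketch of the effect, using a simulated sample stream in place of the front-panel value: the comparison only differs during the single iteration that straddles the change, after which previous and current are equal again.

#include <stdio.h>

int main(void)
{
    /* Simulated samples of a front-panel value: it changes once, at index 3. */
    int samples[] = { 5, 5, 5, 9, 9, 9, 9 };
    int n = sizeof samples / sizeof samples[0];

    int previous = samples[0];
    int changes = 0;
    for (int i = 1; i < n; i++) {
        int current = samples[i];
        if (current != previous)
            changes++;          /* true only on the iteration where it flips */
        previous = current;     /* afterwards previous == current again      */
    }
    /* A single comparison done later (e.g. when the stop button is read)
       sees previous == current and so detects no difference. */
    printf("changes detected: %d\n", changes);  /* prints 1 */
    return 0;
}
-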
Not any more
-
IC. Yes, I tried that and didn't notice much of a change, but then again 3% on 40ms is only about 1ms, which is in the noise level. I would have expected more, but it seems DLL call overhead is virtually non-existent when set to a subroutine. If I ever write a manual, I will.

If you don't use execution systems and/or priorities, then you are limited to 4 threads (+1 for the UI). I don't think most people worry about it (LV is very good at making things appear to be very multi-threaded), but with very IO-oriented asynchronous designs, it improves performance immensely (if used correctly). ThreadConfg.vi is my favourite VI. I'll keep it in mind for now.
-
Nope. We've wanted the ability for years (along with control creation at run-time). We got xnodes instead (a compromise).
-
Delphi, PHP, and (when I'm dragged kicking and screaming like a 5 yr old girl being chased by a great white shark) C/C++.
-
That's about 3%, which is hardly worth the effort (although I'm not sure what you mean by modifying... in-lining?). If I in-line the VIs, I get exactly the same performance for insertions as you, but slightly slower on select (only gaining about 5ms in the 10,000 test).

That won't happen. I use a high priority but in a different execution system, and therefore force LV to run the queries in a different thread from the user's application (assuming the user isn't using the same execution system, of course). It basically forces a high-priority thread rather than VIs, which should mean it gets a higher priority on the scheduler. On my machine I always run with the maximum number of threads (~200), since a lot of my systems use asynchronous tasks at various priorities. This is the way things like VISA work. Although, I did notice it is set to "Standard" and should be set to "Other 1" (not quite sure how that got changed).

I did look at it, but found that I needed to check the error on every DLL call and extract the string if need be. So I went for passing the error code up the chain and converting it at the end; a sketch of that idea is below.

Indeed. Missed that one.
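For illustration, a minimal C sketch of the "convert at the end" idea, assuming SQLite 3.7.15+ for sqlite3_errstr() (the table name is hypothetical). Each low-level call just passes the integer result code up the chain; only the top level turns it into a human-readable string.

#include <stdio.h>
#include <sqlite3.h>

static int do_query(sqlite3 *db, const char *sql)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        while ((rc = sqlite3_step(stmt)) == SQLITE_ROW)
            ;                       /* consume rows */
        if (rc == SQLITE_DONE)
            rc = SQLITE_OK;
    }
    sqlite3_finalize(stmt);         /* no-op if stmt is NULL */
    return rc;                      /* code travels up; no string yet */
}

int main(void)
{
    sqlite3 *db;
    int rc = sqlite3_open("test.db", &db);
    if (rc == SQLITE_OK)
        rc = do_query(db, "SELECT * FROM mytable;");  /* hypothetical table */
    if (rc != SQLITE_OK)
        fprintf(stderr, "SQLite error %d: %s\n", rc, sqlite3_errstr(rc));
    sqlite3_close(db);
    return rc;
}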
-
You can use the "Toggle" or "Change to Control" properties for this. http://www.screencast.com/users/Phallanx/folders/Jing/media/6d33f01c-4961-4c88-9944-e3290ab349a0
-
Excellent. So my "improvements" are indeed improvements. I think we are getting to the stage where implementation is becoming the major difference. Obviously my version uses a much deeper nesting, since I prefer a modular decomposition (and have abstracted further) as opposed to (say) putting many DLL calls in one VI, which would save the sub-VI overheads. But for the sake of performance vs maintenance that is acceptable (to me at least). I would also expect you to be squeezing a little more by using the 2010 in-lining feature (if you are not, then you should be), which is unavailable in 2009. But I take note of your suggestion to promote the get_column_count, which (in theory) should skim a ms or two off the time (doesn't seem to make any noticeable difference on my machine, though); a sketch of the promoted call is below.
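A minimal C sketch of the promoted call, i.e. hoisting sqlite3_column_count() out of the fetch loop so it runs once per statement rather than once per row:

#include <sqlite3.h>

/* Fetch all rows of an already-prepared statement; returns the last step result. */
static int fetch_all(sqlite3_stmt *stmt)
{
    int cols = sqlite3_column_count(stmt);   /* hoisted: called once, not per row */
    int rc;
    while ((rc = sqlite3_step(stmt)) == SQLITE_ROW) {
        for (int i = 0; i < cols; i++) {
            const unsigned char *text = sqlite3_column_text(stmt, i);
            (void)text;  /* in the real API this would be appended to the 2D array */
        }
    }
    return rc;  /* SQLITE_DONE on success */
}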
-
Yes and no. I like the encapsulation so that, if I decide to expose the "Bind Execute" into the Low Level palette, the user would not have to worry about it. I'm not sure (now) if bindings are persistent across opening and closing the DB (I originally thought they were), as it is not stated anywhere and when I checked I couldn't discern any difference in performance. So I'm just erring on the side of caution, really. When I run out of things to do, I might look at it again (if I remember). Have you noticed any improvement in performance between this version (1.2.1) and version 1.1?
-
An empty array is slightly different. Since I use self-indexing for loops, the bind never gets executed. However, I've just noticed that the classic problem also occurs: the sql ref and the DB ref are passed as "0" to the Bind Clear, meaning they never get freed. Whilst this is obviously undesirable, I would have expected SQLite to return a "Misuse" error. It doesn't. Instead, error 1097 occurs, which isn't good (possible crash under the right conditions). I will put some defensive code around this (along the lines of the sketch below). So, inadvertently, you have uncovered a bug, although not the original one.
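A minimal C sketch of the kind of defensive check intended (the helper name safe_bind_clear is hypothetical): guard against a NULL statement ref ourselves rather than relying on SQLite to report misuse, since calling into the library with a NULL handle can crash.

#include <sqlite3.h>

static int safe_bind_clear(sqlite3_stmt *stmt)
{
    if (stmt == NULL)
        return SQLITE_MISUSE;        /* report misuse instead of risking a crash */
    int rc = sqlite3_clear_bindings(stmt);
    if (rc == SQLITE_OK)
        rc = sqlite3_reset(stmt);    /* ready the statement for reuse */
    return rc;
}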
-
I thought about this a while back. LabVIEW has no concept of a "NULL" string: we can neither create one nor check for one. If we put a string control/constant down, we can only have an empty string (and any of our VIs that accept a string will have a string control). So the choice becomes: do we allow a write to a NOT NULL field to always succeed (which is what will happen with your suggestion), or do we define an empty string in LV as being the equivalent of a NULL string? I think the latter is more useful; a sketch of that convention is below. Good job I cannot take back the rep point, eh?
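A minimal C sketch of the convention (the helper name bind_lv_string is hypothetical): an empty LabVIEW string is bound as SQL NULL, so a write to a NOT NULL field with an empty string fails as intended.

#include <sqlite3.h>

/* idx is 1-based, as in the SQLite bind API. */
static int bind_lv_string(sqlite3_stmt *stmt, int idx, const char *s, int len)
{
    if (s == NULL || len == 0)
        return sqlite3_bind_null(stmt, idx);            /* empty -> NULL */
    return sqlite3_bind_text(stmt, idx, s, len, SQLITE_TRANSIENT);
}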
-
I get error code 19 with your 1st snippet: "Abort due to constraint violation" in SQLite_Error.vi:2>SQLite_Bind Execute.vi>SQLite_Insert Table.vi:2>Untitled 1. If the field is not declared with NOT NULL, it succeeds.
-
It's fairly consistent. Here's up to 5 million.
-
I tried to replicate your result for x32 but couldn't. Mine is still linear.
-
Waveform Graph Time scale Display -> Is this a Bug?
ShaunR replied to Pandiarajan's topic in LabVIEW General
Your graph's X axis is set to "Loose Fit".

Loose Fit Property
Short Name: LooseFit
Requires: Base Package
Class: ColorGraphScale Properties
If TRUE, LabVIEW rounds the end markers to a multiple of the increment used for the scale.
-
Indeed. It was an oversight; it should have been -1. I don't think an IPE is really the way forward, as I don't see any performance difference between 0 and -1 (KISS). I consider it a LabVIEW limitation rather than the API's; in theory they should behave identically regardless of the implementation specifics. Differences between compiling in different IDEs are a little disconcerting, since I think we all assume that what works in one will work identically in the other. But it looks like one of those "not a bug, not desired" effects. Good call on finding a probable explanation (your C experience obviously shining through). I think it will be rare occasions that anyone will be querying that many records at a time, and it is still an order of magnitude faster than other DB implementations (like Access). You never know, they might optimise it in LV 2011... 2015.
-
Version 1.2.1 just released. Upgrading to 1.2.1 is highly recommended to address an issue with bulk inserts on LV x32.
-
I'll release the next version a little earlier than planned (later today), since it will eradicate this (well spotted). Funnily enough, it only seems to happen on LV x32; x64 is fine. The next release passes an array of bytes to the bind function, which is faster than passing a string even with the conversion to a U8 array, and it also removes the aforementioned "bug" (the byte-array bind is sketched below). The API already supports reading strings containing \00 (since V1.1): the field just needs to be declared as a blob. I did agonise about making it generic (it just involves a direct replacement of "Fetch column" with "Read Blob"), but decided the performance advantage of not using the generic method outweighed the fact that you just have to define a field type.

Well, I don't think that is the issue, since the later tests should have reduced the allocation to a smaller difference and I would have expected the x32 to be more like the x64 - which it isn't. Suffice to say, there is a difference, and LV x64 is vastly less efficient at building large arrays of strings than x32 (which I find surprising).
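A minimal C sketch of the byte-array bind (the helper name bind_bytes is hypothetical). Passing the length explicitly is what lets embedded \00 bytes survive intact, and declaring the column as BLOB keeps the round trip unambiguous.

#include <sqlite3.h>

static int bind_bytes(sqlite3_stmt *stmt, int idx,
                      const unsigned char *bytes, int len)
{
    /* The explicit length means the data is never truncated at \00. */
    return sqlite3_bind_blob(stmt, idx, bytes, len, SQLITE_TRANSIENT);
}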
-
I think you are describing a **char. When I iterate over the rows and columns, I only retrieve a "C String" type (*char), which I then build into a 2D array. The LabVIEW CLN automagically dereferences this to a LabVIEW string (i.e. it adds the length bytes and truncates at \00). In this sense, it is a pointer to an array of bytes rather than an array of pointers; a sketch is below.
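A minimal C sketch of the distinction (dump_rows is a hypothetical helper): each cell is fetched as a single *char, one call per row/column, never as a **char array of pointers.

#include <stdio.h>
#include <sqlite3.h>

static void dump_rows(sqlite3_stmt *stmt)
{
    int cols = sqlite3_column_count(stmt);
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        for (int i = 0; i < cols; i++) {
            /* One *char per cell: a pointer to a NUL-terminated byte array.
               A CLN configured as "C String Pointer" copies this into a
               length-prefixed LabVIEW string, truncating at \00. */
            const unsigned char *cell = sqlite3_column_text(stmt, i);
            printf("%s%s", cell ? (const char *)cell : "",
                   i + 1 < cols ? "\t" : "\n");
        }
    }
}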
-
What version of LabVIEW and operating system are you using?
-
That doesn't make a lot of sense to me. Surely pointers are just references to where the data is stored, rather than being stored as part of the data. But I ran the tests again to make sure, this time inserting 500 chars rather than the <10 as before. Everything else is the same, apart from taking an average of 5 to cut down the test time. Pretty much the same. There must be a difference between the memory managers and the way x64 manages allocation. Surprising, really; I would expect LV x64 running on an x64 Windows platform to outperform an x32 app.