Everything posted by ShaunR
-
I tried to replicate your result for x32 but couldn't. Mine is still linear.
-
Waveform Graph Time scale Display -> Is this a Bug?
ShaunR replied to Pandiarajan's topic in LabVIEW General
Your graph's X axis is set to "Loose Fit".

Loose Fit Property
Short Name: LooseFit
Requires: Base Package
Class: ColorGraphScale Properties
If TRUE, LabVIEW rounds the end markers to a multiple of the increment used for the scale.
-
Indeed. It was an oversight; it should have been -1. I don't think an IPE is really the way forward, as I don't see any performance difference between 0 and -1 (KISS). I consider it a Labview limitation rather than the API's: in theory they should behave identically regardless of the implementation specifics. Differences between compiling in different IDEs are a little disconcerting, since I think we all assume that what works in one will work identically in the other. But it looks like one of those "not a bug, not desired" effects. Good call on finding a probable explanation (your C experience obviously shining through). I think it will be rare occasions that anyone will be querying that many records at a time, and it is still an order of magnitude faster than other DB implementations (like Access). You never know, they might optimise it in LV 2011.
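For anyone following along: assuming this refers to the length argument of SQLite's text bind call (an assumption on my part), the C API's sqlite3_bind_text() takes a byte count, with a negative value meaning "read up to the NUL terminator". A minimal sketch of the difference:

    #include <string.h>
    #include <sqlite3.h>

    /* Sketch only: the third argument of sqlite3_bind_text() is a byte
       count. -1 means "scan to the NUL terminator"; 0 would bind an
       empty string. */
    static void bind_value(sqlite3_stmt *stmt, const char *value)
    {
        /* Let SQLite find the end of the string itself. */
        sqlite3_bind_text(stmt, 1, value, -1, SQLITE_TRANSIENT);

        /* Identical result, but saves SQLite an internal strlen(). */
        sqlite3_bind_text(stmt, 2, value, (int)strlen(value), SQLITE_TRANSIENT);
    }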
-
Version 1.2.1 just released. Upgrading to 1.2.1 is highly recommended to address an issue with bulk inserts on LV x32.
-
I'll release the next version a little earlier than planned (later today) since it will eradicate this (well spotted). Funnily enough, it only seems to happen on LV x32; x64 is fine. The next release passes an array of bytes to the bind function, which is faster than passing a string even with the conversion to a U8 array. It also removes the aforementioned "bug".

The API already supports reading strings containing \00 (since V1.1); the field just needs to be declared as a blob. I did agonise about making it generic (it just involves a direct replacement of "Fetch Column" with "Read Blob"), but decided the performance advantage of not using the generic method outweighed the fact that you have to define a field type.

Well, I don't think that is the issue, since the later tests should have reduced the allocation to a smaller difference and I would have expected the x32 to be more like the x64 - which it isn't. Suffice to say, there is a difference, and LV x64 is vastly less efficient at building large arrays of strings than x32 (which I find surprising).
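For anyone curious how \00-safe storage works underneath (a sketch of the underlying SQLite C API, not the LabVIEW API itself), the blob bind and column calls carry an explicit byte count, so embedded NULs survive the round trip:

    #include <string.h>
    #include <sqlite3.h>

    /* Store raw bytes: the explicit length means embedded \00 bytes are
       written verbatim. */
    static void write_bytes(sqlite3_stmt *stmt, const unsigned char *data, int len)
    {
        sqlite3_bind_blob(stmt, 1, data, len, SQLITE_TRANSIENT);
    }

    /* Read them back: sqlite3_column_bytes() reports the true length, so
       nothing is truncated at the first NUL. */
    static int read_bytes(sqlite3_stmt *stmt, unsigned char *out, int max)
    {
        const void *p = sqlite3_column_blob(stmt, 0);
        int len = sqlite3_column_bytes(stmt, 0);
        if (len > max) len = max;
        if (p && len > 0) memcpy(out, p, len);
        return len;
    }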
-
I think you are describing a **char. When I iterate over the rows and columns, I only retrieve a "C String" type (*char), which I then build into a 2D array. The Labview CLN automagically dereferences this to a Labview string (i.e. it adds the length bytes and truncates at \00). In this sense, it is a pointer to an array of bytes rather than an array of pointers.
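In C terms, the distinction looks like this (a minimal sketch, not code from the API):

    /* A "C String" (*char): one pointer to a NUL-terminated byte array.
       This is what the CLN converts to a Labview string by scanning to
       the first \00 and prepending the length. */
    const char *column_value = "hello";

    /* A **char: an array of pointers, each one pointing to a separate
       string - a different beast entirely. */
    const char *columns[] = { "id", "name", "value" };
    const char **column_ptrs = columns;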
-
What version of Labview and operating system are you using?
-
That doesn't make a lot of sense to me. Surely pointers are just references to where the data is stored rather than being stored as part of the data. But I ran the tests again to make sure, this time inserting 500 chars rather than the <10 as before. Everything else is the same, apart from taking an average of 5 to cut down the test time. Pretty much the same. There must be a difference between the memory managers and the way x64 manages allocation. Surprising really; I would expect LV x64 running on an x64 Windows platform to outperform an x32 app.
-
Definitely the former. If I pre-allocate the array and replace elements rather than auto-index, then they perform exactly the same. But what I'm confused by is why there should be a difference between x32 and x64. After all, it should be the same amount of memory being (re)allocated, and it is an LV internal implementation.
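The C analogue of the two patterns (a sketch; N and the payload are purely illustrative) shows why the auto-indexing case is so allocator-sensitive:

    #include <stdlib.h>
    #include <string.h>

    #define N 100000

    /* Grow-as-you-go (analogous to auto-indexing/Build Array): the
       buffer may be reallocated and copied many times. Error checks
       omitted for brevity. */
    static char **build_incrementally(void)
    {
        char **arr = NULL;
        for (size_t i = 0; i < N; i++) {
            arr = realloc(arr, (i + 1) * sizeof *arr);
            arr[i] = strdup("record");
        }
        return arr;
    }

    /* Pre-allocate once, then replace elements (analogous to Initialize
       Array + Replace Array Subset): one allocation, no copying. */
    static char **build_preallocated(void)
    {
        char **arr = malloc(N * sizeof *arr);
        for (size_t i = 0; i < N; i++)
            arr[i] = strdup("record");
        return arr;
    }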
-
I'm guessing you were a C++ programmer in an earlier life (I also suspect you haven't been using LV since 1998 as your profile suggests). Labview passes data. Not pointers, objects or lemons (unless you specifically tell it to, and even then it's just smoke and mirrors). All functions in LV are designed to operate on data. It is a data-centric, data-flow paradigm. Moreover, it is a "strictly typed", data-centric, data-flow language. When you connect two VIs together, you are passing a value, not an object. When you use a reference, you are also passing a value; however, the property nodes know (by the type) that they need to "look up" and de-reference it in order to obtain the data. References are, if you like, a "special" case of data rather than the "norm", and require special functions (property and method nodes) to operate on them. If you were to inspect the value of a reference, you would find a pointer-sized int. However, if you looked at that memory location you would not find your control or indicator, or even its data.
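A rough C analogy of that last point (entirely hypothetical names, just to illustrate the idea of a reference as an opaque handle rather than an address):

    #include <stdint.h>
    #include <stddef.h>

    typedef uintptr_t Ref;          /* a pointer-sized int, nothing more */

    static void *object_table[256]; /* runtime-internal; callers never see it */

    /* Only a "special" function (think property/method node) can turn
       the integer back into data - the value itself is not an address. */
    static void *deref(Ref r)
    {
        return (r < 256) ? object_table[r] : NULL;
    }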
-
Sure. Here are a couple I have used.
-
OK. I'm fairly happy with the performance of the API (there are to be a couple more minor tweaks, but nothing drastic), so I started to look at SQLite's performance. In particular, I was interested in how SQLite copes with various numbers of records and whether there is deterioration in performance with increasing numbers of records. Wish I hadn't.

Below is a graph of inserts and select queries for 1 to 1,000,000 records. The test machine is a Core 2 Duo running Win 7 x64 using Labview 2009 SP1 x32. Each data point is an average over 10 bulk inserts using the "Speed Example.vi". The database file was also deleted before each insert to ensure fragmentation and/or tree searching were not affecting the results. I think you can see that both insert and select times are fairly linear in relation to the number of records. And (IMHO) 5 seconds to read or write a million records (consisting of 2 columns) is pretty nippy.

Now the same machine (exactly the same test harness) but using LV 2009 SP1 x64. Hmmm. It's interesting to note that up until about 100,000 records, x64 performs similarly to x32. However, above 200,000 records the memory usage reported by the Windows task manager shows x64 starting to climb further. Typically, by the end of the test x32 has consumed about 450MB whilst x64 is about 850MB when viewed in the Windows task manager. Checking SQLite's internal memory allocation using the "Memory.VI" yields identical usage between both tests. However, LV x64 seems to be using 2x the Windows memory. I'm tempted to hypothesise that memory allocation in LV x64 is the cause. Can anyone else reproduce this result? A single check at (say) 500,000 should be sufficient.
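For context, the shape of such a bulk-insert test at the SQLite C API level is roughly the following (a hypothetical sketch, not the contents of "Speed Example.vi"; it assumes a table t(a, b) already exists):

    #include <sqlite3.h>

    /* One transaction, one prepared statement, n bound inserts - the
       standard recipe for fast bulk inserts. Error checks omitted. */
    static int insert_n(sqlite3 *db, int n)
    {
        sqlite3_stmt *stmt;
        sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
        sqlite3_prepare_v2(db, "INSERT INTO t(a,b) VALUES(?,?)", -1, &stmt, NULL);
        for (int i = 0; i < n; i++) {
            sqlite3_bind_int(stmt, 1, i);
            sqlite3_bind_text(stmt, 2, "payload", -1, SQLITE_STATIC);
            sqlite3_step(stmt);
            sqlite3_reset(stmt);
        }
        sqlite3_finalize(stmt);
        return sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
    }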
-
LV 2010 Icon Editor bug affects sub vi performance
ShaunR replied to Daklu's topic in LabVIEW General
Reproduced also in LV2009 x32 & x64 (PDS). I also noticed that if you run it before saving (i.e. have modified VIs in memory because I switched from LV x32 to x64) you get the same results. If you then "Save All", it runs OK. However, after invoking the Icon Editor, "Save All" or a re-compile has no effect.
-
You smooth talker
-
You need to pass the "reference" of the control to the sub VI. By wiring the graph control, you are only passing the "data" that the graph contains.
-
http://www.metacafe.com/watch/1154898/how_to_make_a_tinfoil_hat/ I wonder what colour the sky is on her planet
-
That's fantastic. Just goes to show: "There are no bad programs, only bad programmers". I think I'll set that up as my wallpaper... move over, Grace Park.
-
Take a look at the speed example.
-
You do not need to explicitly open or create a file with any of the high-level API (the exception being "Query by Ref"), as it will open or create a file if one doesn't exist. Just specify the file name. You cannot write directly to the file using standard file write functions; a SQLite file has a complex structure.
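This mirrors the underlying C API, where sqlite3_open() itself creates the database file if it doesn't already exist (a minimal sketch; the file name is arbitrary):

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        /* Opens "test.db" if present, creates it otherwise - there is
           no separate "create file" step. */
        if (sqlite3_open("test.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }
        sqlite3_close(db);
        return 0;
    }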
-
Color Change with value in tank... or something...
ShaunR replied to 335_x's topic in LabVIEW General
I don't think I'm that bad
-
Be careful whose toes you step on today because they might be connected to the foot that kicks your ass tomorrow!
-
Color Change with value in tank... or something...
ShaunR replied to 335_x's topic in LabVIEW General
In the "Dialog & User interface" palette the is a "Colorbox constant". You can wire that to it and choose a colour by clicking on it which will show the colour chooser dialogue. -
Sweet
-
There's probably a better mathematical solution, but this works as a practical approximation for this sort of thing (certainly for repeatability at least). As Yair said, take loads of readings. Then calculate the mean and variance of your data. Plug those values into the probability density function for a truncated normal distribution and solve for x (note that b in this case is infinity). It is flawed in that it assumes your data is normally distributed (which it isn't; it's half of a normal distribution), but it will give a much better approximation.
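For reference, the density in question is the standard truncated normal PDF (phi and Phi below are the standard normal PDF and CDF), in LaTeX:

    f(x;\mu,\sigma,a,b) =
        \frac{\tfrac{1}{\sigma}\,\varphi\!\left(\tfrac{x-\mu}{\sigma}\right)}
             {\Phi\!\left(\tfrac{b-\mu}{\sigma}\right) - \Phi\!\left(\tfrac{a-\mu}{\sigma}\right)},
        \qquad a \le x \le b.

With b = infinity (as in this case), the denominator reduces to 1 - \Phi\!\left(\tfrac{a-\mu}{\sigma}\right).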