Posts posted by drjdpowell
-
20 hours ago, smithd said:
The real reason I'm posting is just to bump and see how jsontext is coming. It looks like you're still pretty actively working on it on bitbucket...do you feel more confident about it, or would you still call it "VERY untested"? I'd love to try it out for real when you get closer to, shall we say a 'beta' release?
Also, from what I could tell there isn't a license file in the code. Are you planning on licensing it any differently from your other libraries, or you just never got around to putting in a file?
I’ll call this beta if you want to give it a try:
jdp_science_jsontext-0.2.0.9.vip
BSD license.
-
13 hours ago, smithd said:
Using them for current project, and they really de-labview the user interface
I find the biggest improvement with Flat design is in the simplest of front panels. Here's a dialog box:
Here I just used link-style buttons (the square around "Save as.." is placed by LabVIEW, as that button is bound to the Enter key). I eliminated the window title normally shown on the window frame and instead placed that text in bold. This makes for a very clear, uncluttered dialog box.
-
6 hours ago, Neko said:
Is there any way to change the ON color of a Dark-colour Button?
Do you know how to Customise controls?
To get mouse-hover effects, I based these buttons on the System controls, which do not support different colours for ON/OFF states. So I instead made the ON state a 4x4 pixel PNG (like the hover states). If you install Bitman from the LAVA-CR, you can use the utility I used to make these PNGs:
<LabVIEW>\vi.lib\drjdpowell\Flatline Controls\Utilities\Make small transparent square in PNG.vi
Set your desired colour and Alpha transparency (I use a slightly higher Alpha for the Hover states).
You can see the "pallet" of colours I used in: <LabVIEW>\vi.lib\drjdpowell\Flatline Controls\Button Pallet.vi
To actually use your new transparencies, I'm afraid you have to manually customise the control and swap the new PNG in for the old one in the ON state. If you Google, you can probably find a tutorial. Once you learn it, it's not that hard.
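For anyone without the Bitman utility handy, a small solid-colour square PNG with alpha can also be generated outside LabVIEW. Here is a standard-library-only Python sketch (the colour, alpha, and file name are just examples, and this writes a minimal uncompressed-filter PNG by hand rather than mimicking the VI above):

```python
# Generate a small semi-transparent solid-colour PNG (RGBA), similar in
# spirit to the "Make small transparent square" idea in the post.
# Standard library only: the PNG is assembled chunk by chunk.
import struct
import zlib

def chunk(tag, data):
    """A PNG chunk: length, tag, data, CRC over tag+data."""
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def make_square_png(path, rgb=(70, 130, 180), alpha=64, size=4):
    """Write a size x size PNG filled with one semi-transparent colour."""
    # IHDR: width, height, bit depth 8, colour type 6 (RGBA), then
    # compression/filter/interlace all zero.
    ihdr = struct.pack(">IIBBBBB", size, size, 8, 6, 0, 0, 0)
    # Each scanline: filter byte 0, then `size` RGBA pixels.
    row = b"\x00" + bytes(rgb + (alpha,)) * size
    idat = zlib.compress(row * size)
    with open(path, "wb") as f:
        f.write(b"\x89PNG\r\n\x1a\n")          # PNG signature
        f.write(chunk(b"IHDR", ihdr))
        f.write(chunk(b"IDAT", idat))
        f.write(chunk(b"IEND", b""))

make_square_png("hover_state.png", rgb=(70, 130, 180), alpha=64)
```

The resulting 4x4 file can then be swapped into the customised control exactly as described above.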
-
Here is a demo button based on the Silver buttons (instead of system buttons). No hover effects, but one can colour the ON and OFF states differently, which you can't do with system buttons.
-
1 hour ago, ShaunR said:
SQLite is single writer, multiple readers, as it does table-level locking. If drjdpowell had followed through with the enumeration it would (or should) have been in there. The high-level API in the SQLite API for LabVIEW insulates you from the Error 5 hell and "busy" handling (when the DB is locked) that you encounter when trying simultaneous parallel writes with low-level APIs. So simultaneous parallel writes is not appropriate usage.....sort of
Oh, that enumeration. Not sure that would be on the list, as only one thing can be written to a file at one time, even by TDMS. SQLite is by default ACID-compliant (now that would be on the list), but one can turn that off to get asynchronous disk writes, just like TDMS. And as long as your writes are faster than your busy-handler timeout (I set that at 5 seconds, adjustable), there are no Busy(5) errors. The issue is just write speed, where TDMS wins.
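To make those trade-offs concrete, here is a sketch using Python's built-in sqlite3 binding (the file name and table are invented for illustration): the connect timeout plays the role of the busy handler, and the synchronous pragma is the switch between ACID durability and faster asynchronous disk writes.

```python
# Busy handling and durability trade-offs in SQLite, sketched in Python.
import sqlite3

# timeout=5.0 ~ a 5-second busy handler: retries while the DB is locked
# instead of immediately raising a "database is locked" error.
conn = sqlite3.connect("data.db", timeout=5.0)

# Default SQLite is ACID-compliant; synchronous=OFF trades durability for
# faster writes (similar in spirit to asynchronous TDMS streaming).
conn.execute("PRAGMA synchronous=OFF")
conn.execute("PRAGMA journal_mode=WAL")  # WAL also lets readers run during a write

conn.execute("CREATE TABLE IF NOT EXISTS log (t REAL, value REAL)")
# Grouping many inserts into one transaction is the big write-speed win.
with conn:
    conn.executemany("INSERT INTO log VALUES (?, ?)",
                     [(i * 0.001, i * 2.0) for i in range(1000)])
conn.close()
```

With settings like these, write bursts only fail if a single writer holds the lock longer than the timeout, which matches the "busy-handler timeout" behaviour described above.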
-
1 hour ago, ShaunR said:
If drjdpowell had followed through with the enumeration it would (or should) have been in there.
Sorry, what “enumeration”?
-
1 hour ago, ShaunR said:
Only with Pragma Synchronous = OFF
No, it seems to work fine without that, as long as one is only reading, not writing.
-
16 hours ago, Manudelavega said:
And since we are dealing with files on disk: SSD drive!! Forget about HDD, way too slow, your users will have to wait several seconds each time they need to refresh the graphs.
Actually, at least up to the 2GB files I’ve tested, the SQLite file gets held in memory by Windows File Cache, so refreshing is fast (though the initial read will be slower with an HDD than an SSD, as will be writing).
-
TDMS is much more specialised than the more general-purpose SQLite. If your use case is well within what it is designed for (and it sounds like it is) then it is likely the better choice.
-
You didn't make them from scratch; you used available components (such as lawnmowers, brakes, engines, etc.) all of which you accepted as not being sabotaged by someone who wanted to kill you. An MOT is a non-onerous test, involving a reasonable set of requirements, so that sounds like a good idea. It would be a LOT of work to pass an MOT without using components manufactured by other people.
-
1 hour ago, ShaunR said:
A) will definitely kill you, whereas B) probably won't, since it is a design requirement not to kill you. If that isn't obvious then I despair
Or. Looking at it another way.
The goal of writing software is to succeed and realise the requirements. Would you prefer A) or B) to be successful, and the contingencies employed to ensure success to drive towards A) or B)?
Nobody makes their own car because no-one is trying to sabotage their car, and driving a car built by yourself is extremely dangerous. If you think someone might try to kill you, then you still don’t build your own car. Instead, you verify that no-one has tampered with your car.
I was just wondering how that kind of analysis goes with software, and whether management, in its insistence on no open source or on onerous verification requirements, is actually making the correct choice as far as minimizing risk.
-
But which is the bigger risk of this example:
A) Someone will sell me a car that has been modified to crash and kill me.
B) The car that I build from scratch will crash and kill me.
Mitigating (A) by accepting (B) is not necessarily reducing your chance of death.
-
10 hours ago, ShaunR said:
When I say "obvious". I mean in the same sense that the prospect of being beaten senseless on a Friday night is intuitively worse and riskier than tripping over and possibly hurting yourself even though I don't know the probabilities involved.
I’m not sure intuition is that reliable in such high-importance-but-low-likelihood things. People die from mundane things like tripping over.
-
I wonder how large the risk of malicious code is relative to the risk of serious bugs in code implemented from scratch.
-
Anybody using Postgres who would like to beta test my libpq-based library? It’s similar to my SQLite access library.
-
You’re running out of memory because you are copying the giant string somewhere. If you never alter the string, but instead work along it using the “offset” inputs on the string functions, then you can use all the loops you like.
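The same offset idea translates directly to other languages. In this Python sketch, passing a start offset to the search function walks the original buffer in place, whereas repeatedly slicing the string (`s = s[i:]`) would copy the remainder on every iteration and blow up memory for a giant string:

```python
# Count lines in a huge string without ever copying it: str.find with a
# start offset is the analogue of the "offset" inputs on LabVIEW's
# string functions.
def count_lines(big):
    count, offset = 0, 0
    while True:
        nl = big.find("\n", offset)   # search from offset; no copy made
        if nl < 0:
            # Count a trailing partial line, if any text remains.
            return count + (offset < len(big))
        count += 1
        offset = nl + 1

big = "x" * 50 + "\n" + "y" * 50 + "\n" + "z" * 10
print(count_lines(big))   # → 3
```

The memory cost stays constant no matter how many times you loop, because only the integer offset changes, never the string.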
-
1 hour ago, JamesMc86 said:
Sometimes in config files I will use the key-value mode to read what items are in the object to help with defaults if missing or version migration which I guess isn't the intention of this API but that's the only case I have that this wouldn't work for.
Oh that IS a use case. Though lookup is much slower than with a Variant-Attribute-based object, it is much faster than doing the initial conversion to that object, so one is well ahead overall if one only needs to do a few lookups.
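A toy Python analogue of that trade-off, with an invented helper name and a flat example document: scanning the raw JSON text per lookup is slower than a dict lookup, but skips the one-time cost of building the full lookup structure, so it wins when you only need a few values.

```python
# Per-lookup text scanning vs paying once for a full parse.
# Toy only: assumes a flat JSON object whose values are scalars.
import json
import re

def text_lookup(json_text, key):
    """Scan the raw text for "key": <value> without parsing the whole
    document. Hypothetical helper for illustration, not a library API."""
    pattern = r'"%s"\s*:\s*("(?:[^"\\]|\\.)*"|[^,}\s]+)' % re.escape(key)
    m = re.search(pattern, json_text)
    return json.loads(m.group(1)) if m else None

doc = '{"name": "run42", "samples": 10000, "ok": true}'

# A couple of cheap lookups: no full conversion needed.
print(text_lookup(doc, "samples"))   # → 10000

# Many lookups: the one-time full parse into a dict pays for itself.
parsed = json.loads(doc)
print(parsed["name"])                # → run42
```

The crossover point is exactly the one described above: a few lookups favour direct scanning, hundreds favour converting once.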
-
Here's an alpha version to look at, with one example, just to show the API:
jdp_science_jsontext-0.1.0.4.vip
This is VERY untested.
-
4 hours ago, smithd said:
My point here is that as part of the json generation step for me, I'm passing in a large binary string which has to be escaped and handled by the flatten to json function.
Be careful if you use the NI function to get your binary data back, as it has a bug that will truncate the string at the first zero, even though that zero is properly escaped as \u0000. PNG files might or might not have zeros in them, but other binary things do (flattened LVOOP objects, for example).
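For contrast, here is what correct round-tripping of embedded zeros looks like, sketched with Python's json module (the byte string is invented): a conforming parser must preserve an escaped NUL, not truncate at it. Base64 is shown as well, since it sidesteps the escaping edge cases entirely for binary payloads.

```python
# Round-tripping binary data with embedded zeros through JSON.
import base64
import json

binary = b"\x89PNG\x00\x00\x1a"            # binary with embedded zeros

# Escaped-string route: map bytes 1:1 to code points (latin-1), then let
# the JSON encoder escape control characters as \u0000 etc.
as_text = binary.decode("latin-1")
doc = json.dumps({"data": as_text})
assert "\\u0000" in doc                    # the zeros are escaped, not dropped
back = json.loads(doc)["data"].encode("latin-1")
assert back == binary                      # no truncation at the first zero

# Base64 route: plain ASCII in the JSON, no escaping edge cases at all.
doc_b64 = json.dumps({"data": base64.b64encode(binary).decode("ascii")})
back_b64 = base64.b64decode(json.loads(doc_b64)["data"])
assert back_b64 == binary
```

Any JSON implementation that fails the first round trip has the truncation bug described above.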
-
11 hours ago, smithd said:
One thing I can say for sure is I've never needed the in-memory key-value features of the lava API. I just use the json stuff as an interchange, so all those objects only ever go in one function.
I have only used it that way a bit. And that was basically for recording attributes of a dataset as it passed through a chain of analysis. I was only adding things at a few places before the result got saved to disk/database. The new JSONtext library has Insert functions to add/change values. These are slower than the old library, but not so much as to make up for the expensive conversion to/from LVOOP objects, unless one is doing hundreds of inserts. If someone is using LAVA-JSON objects in such a way, I would like to know about it.
-
30 minutes ago, ShaunR said:
If you want bigger JSON streams then the bitcoin order books are usually a few MB
I've had a client hand me a 4GB JSON array, so I'm OK for large test cases.
-
22 minutes ago, ShaunR said:
300MB/sec?
Only 125MB/sec, but I was testing calling 'SELECT json_extract($json,$path)' which has the extra overhead of getting the JSON string in and out of the db. I wish I could match 300MB/sec in LabVIEW.
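That json_extract pattern is easy to try from Python's sqlite3 binding, for anyone curious about the overhead being discussed. This assumes an SQLite build with the JSON1 extension (standard in recent builds); the document and paths are invented for illustration:

```python
# Querying JSON text with SQLite's json_extract, no table required.
# Requires SQLite compiled with the JSON1 extension (the default in
# modern builds).
import sqlite3

conn = sqlite3.connect(":memory:")
doc = '{"op": "utx", "x": {"hash": "abc123", "size": 257}}'

# Each call passes the JSON string into the engine and pulls one value
# back out -- the round-trip overhead mentioned in the post.
row = conn.execute(
    "SELECT json_extract(?, '$.x.hash'), json_extract(?, '$.x.size')",
    (doc, doc)).fetchone()
print(row)   # → ('abc123', 257)
conn.close()
```

Storing the JSON in a column and extracting in the query avoids re-sending the string each time, which is the usage Shaun describes.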
-
2 hours ago, ShaunR said:
I don't use any of them for this sort of thing. They introduced the JSON extension as a build option in SQLite so it just goes straight in (raw) to an SQLite database column and you can query the entries with SQL just as if it was a table. It's a far superior option (IMO) to anything in LabVIEW for retrieving including the native one.
I prototyped using an in-memory SQLite DB to do JSON operations, but I can get comparable speed by direct parsing. But using JSON support in a Database is a great option.
-
Some performance numbers:
I took the "Message on new transaction" JSON from the blockchain.info link that Shaun gave, created a cluster for it, and compared the latest LAVA-JSON-1.4.1**, inbuilt NI-JSON, and my new JSONtext stuff for converting JSON to a cluster.
- LAVA-JSON: 7.4 ms
- NI-JSON: 0.08 ms
- JSONtext: 0.6 ms
Then I added a large array of 10,000 numbers to bulk the JSON out by 50kB.
If I add the array to the Cluster I get these numbers:
- LAVA-JSON: 220 ms
- NI-JSON: 5.6 ms
- JSONtext: 9.0 ms (I pass the large array to NI-JSON internally, which is why I'm closer)
If I don't add the array to the cluster (say, I'm only interested in the metadata of a measurement):
- LAVA-JSON: 135 ms
- NI-JSON: 5.2 ms
- JSONtext: 1.1 ms
The NI tools appear to vet everything very carefully, even unused elements, while I do the minimal checking needed to parse past the large array (in fact, if I find all cluster elements before reaching the array, I just stop, meaning the time to convert is 0.6 ms, as if the array wasn't there).
**Note: earlier LAVA-JSON versions would be notably worse, especially for the large case.
Posted in Database Connectivity Toolkit Multi Row Insert (Database and File IO):
I’ve been working on a wrapper for libpq.dll, which theoretically could be made to work on Linux by just using a libpq.so file, but I have only tried Windows so far.
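The cross-platform part really is just the library file name. As a sketch of the idea (in Python ctypes rather than LabVIEW's Call Library node, with the conventional file names but no guarantee they match a given install), `PQlibVersion` is a real libpq entry point that makes a handy smoke test:

```python
# Loading libpq with a platform-dependent file name, then calling a
# real libpq function (PQlibVersion) as a smoke test.
import ctypes
import ctypes.util
import sys

def load_libpq():
    """Try the conventional platform-specific names, then a generic
    lookup. Returns None if libpq is not installed."""
    if sys.platform == "win32":
        names = ["libpq.dll"]
    elif sys.platform == "darwin":
        names = ["libpq.dylib"]
    else:
        names = ["libpq.so.5", "libpq.so"]
    for name in names:
        try:
            return ctypes.CDLL(name)
        except OSError:
            pass
    found = ctypes.util.find_library("pq")
    return ctypes.CDLL(found) if found else None

libpq = load_libpq()
if libpq is not None:
    libpq.PQlibVersion.restype = ctypes.c_int
    print("libpq version:", libpq.PQlibVersion())
else:
    print("libpq not found on this machine")
```

A LabVIEW wrapper would do the equivalent by configuring the Call Library node's library path per platform, with the rest of the calls unchanged.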