Everything posted by ShaunR
-
If it's good enough for Apple, Google and now Microsoft ..........
-
If the user makes a mistake it is an error (on the user's part). If he needs to interpret a 20 GB log file to guess why, then that's an uncaught error with no recovery (on the programmer's part). I always say errors are for programmers only, because users just want software that works - preferably with one button: "Start". As a user of your software I don't want to be "trained" how to interpret your logs of gibberish, or wade through reams of irrelevance to find out what to wiggle. I want to know what's wrong and what to wiggle so it works.
Well. Changing a background colour doesn't require pre-defined controls on the front panel and is supported by all controls/indicators. I'm not sure how you used an image, but I would imagine it needed an image control next to each one unless it was a list/tree or something. You can get (and save) lists of the controls and their limits (from your database) and have a generic check/set that just iterates over the visible controls and sets the colour and limits. This also enables you to tell the user why the value they entered is incorrect if it's outside limits, and what is acceptable to enter. See the "Panel Settings Example" to see how this might work.
The cluster? For me, no. That's also used for sequencing.
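Something like this is all the database side needs (a sketch only - the table and column names are made up for illustration, not from the example):

    -- hypothetical table holding per-control limits and highlight colours
    CREATE TABLE panel_limits (
        control_label TEXT PRIMARY KEY,   -- matches the control's caption/label
        min_val       REAL,
        max_val       REAL,
        fail_colour   INTEGER             -- background colour to apply when out of range
    );

    -- the generic check/set iterates the visible controls and runs this per control
    SELECT min_val, max_val, fail_colour
    FROM panel_limits
    WHERE control_label = :label;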
-
Without getting into error strategies, I haven't used the default LabVIEW handler in real applications since, probably, version 7. Yes, I do use it for examples and quick checks of the error wire when debugging, but for real apps I want a dialogue that is uniform with the rest of the application's look. So I have a customisable 3-button dialogue that has images, on-the-fly translation, can query the database and has a timeout. It is used for errors, about boxes and confirmations - pretty much all dialogues - so the interface is consistent.
I'm not a fan of the status bar for errors, just as I'm not a fan of the status bar for mouse-over help. A one-liner is not enough for users. They really need a plain [insert your language here] error message and an explanation of how to proceed. From a personal perspective, it just blurs the line between something that's nice to know (which I associate with status bars, so I don't look at them often) and something I must know.
For settings, I am a fan of the browser style of flagging errors, i.e. change the background colour of the item and show a message saying "sort that out!"
-
installation error LabVIEW 2014 64-bit on windows 10
ShaunR replied to kuroko's topic in LabVIEW General
Indeed. I think it makes sense for the other products, since the "32/64-bit" designation refers to the bitness of the LabVIEW IDE, so both 32- and 64-bit LabVIEW are supported. -
Suitable way to support a multi-channel user interface?
ShaunR replied to RnDMonkey's topic in User Interface
I know of quite a few people that use the MDI toolkit for things like this. If your VIs are self-contained then it is incredibly easy, just launching panels. I don't normally advocate MDI for devices due to possible resource conflicts, but it may be worth looking at for your use case. -
Indeed. But in LabVIEW, pure (CS) programmers are scarce and applied programmers are many, which is why I pointed out the electrician/decorator example. 'Parallel processes' is actually worse, since it has a well-defined meaning in terms of the operating system, and LabVIEW VIs run under the executable's process. I see similar misuses of "threads". Anyway. Just food for thought.
-
I think your framework has come far enough now that you need to drop all this "actor" terminology. You now have specific modes of messaging and operations such as services, listeners, controllers and processes, which are all merged under the banner "actor" - the same way everything is a "VI", even LVPOOP "methods". This is a similar scenario to the electrician/decorator problem, and a switch of view will help understanding, adoption and epiphanies. I tend to think of actors as the micro and services et al. as the macro. Your framework is superior to the Actor Framework, so you should no longer ride on its terminology coat-tails, and the OOP definition is just a universal catch-all for chunks of code (turtles all the way down). Calling all your use-case solutions "actors" just confuses and hides the underlying application realisation, and therein lies the power of your framework.
-
You are right. I've just noticed the semi-colons. I thought it was a JSON export format of STIL that was posted.
-
If manipulating, rather than creating, I would take a look at the new SQLite features to see if that would be a solution. You could import a STIL file's JSON representation directly. You can then query (or update) its parameters as if they were part of the database. This would link in extremely well with the rest of a test system, so you could pull out test results and the STIL parameters for a particular configuration or date/time with transparent SQL queries.
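Roughly along these lines (the table, JSON paths and parameter names are invented for illustration; it assumes the JSON1 functions are available):

    -- hypothetical table holding one STIL document's JSON representation per row
    CREATE TABLE stil_docs (id INTEGER PRIMARY KEY, imported_at TEXT, doc TEXT);

    -- query a single parameter as if it were a column
    SELECT json_extract(doc, '$.Signals.CLK.Period') FROM stil_docs WHERE id = 1;

    -- or update a parameter in place
    UPDATE stil_docs
    SET doc = json_set(doc, '$.Signals.CLK.Period', '10ns')
    WHERE id = 1;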
-
Privacy policy?
-
Build Number
ShaunR replied to Neil Pate's topic in Application Builder, Installers and code distribution
IC. It was this I was referring to -
Build Number
ShaunR replied to Neil Pate's topic in Application Builder, Installers and code distribution
The command line is never a solution on Windows. That's a Linux fetish. Look for the DLLs. Most cross-platform dynamic libraries have a libversion function call and there is bound to be one for Git. -
I'm not sure what bit you read that said memory-mapped IO removes concurrency. That doesn't seem right. Set PRAGMA synchronous=FULL and turn off "Write Cache" in Device Manager for the worst case, but the behaviour you describe sounds more like LabVIEW memory allocation than OS disk caching. I haven't seen this specific behaviour, but it was a few years ago that I used SQLite with anything other than an SSD or USB memory stick. Anyway. It's all academic if the meeting-monkeys decree another course.
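For reference, the SQLite side of that worst-case test is just two pragmas (a sketch; the values are only examples):

    PRAGMA synchronous = FULL;   -- fsync on every commit; the most conservative (and slowest) setting
    PRAGMA mmap_size = 0;        -- disable memory-mapped I/O entirely to take it out of the picture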
-
You have all the easy bases covered (you are aware that not all of those are sticky, and have to be applied every time the connection is opened, not just when the tables are created?). At this point I usually look for a more suitable alternative for the use case. TDMS would be far superior for streaming the data, but I expect you have hit a brick wall there too with the decimation, making the idea of doing it in a query attractive by its sheer simplicity. If SQLite is the "closest" of all the considered options then you would have to really dive in and get your hands dirty. I'm pretty sure you are already close enough that you could probably get there eventually, but it's a whole domain's worth of knowledge in and of itself. If we ignore physical constraints like disks, then there is a whole raft of low-level configurations of how SQLite operates, so it would be a case of investigating forcing manual optimisation of query plans, memory-mapped IO, or even writing your own "decimation" function or extension, to name just a couple. Have you tried the SQLite mailing lists?
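For the "decimation in a query" idea, the crude version is just modulo arithmetic on the rowid (a sketch - table and column names are invented, and it assumes rowids are contiguous, i.e. no deletes):

    -- keep roughly one row in every thousand
    SELECT rowid, ts, ch0
    FROM samples
    WHERE (rowid % 1000) = 0
    ORDER BY rowid;

    -- memory-mapped I/O is one of the low-level knobs mentioned; map up to 256 MB of the file
    PRAGMA mmap_size = 268435456;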
-
If you are running with SYNC=FULL (the default) then SQLite is using write-through and Windows buffering is bypassed, since it breaks ACID. This makes a big difference on mechanical drives - not so much on SSDs. You can tweak more performance by not writing a journal (JOURNAL=OFF) and setting SYNC=OFF, at the expense of integrity in the event of a catastrophic failure.
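In pragma form, the fast-but-fragile end of that trade-off looks like this (a sketch; a crash or power loss can corrupt the file with these settings):

    PRAGMA synchronous = OFF;    -- don't wait for the OS/drive to confirm each write
    PRAGMA journal_mode = OFF;   -- no rollback journal, so no recovery from a failed transaction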
-
That's interesting but not surprising. I might add some more benchmarks around this to those for rows and bulk inserts. It would be a useful metric to see what the performance overhead is for varying indexes. 20K/sec bulk INSERT is easily achievable. I'm not sure if you missed a zero off of that, but 20K x 27 cols is about 100 ms for me.
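For reference, the standard way to hit bulk-INSERT rates like that is one prepared statement executed repeatedly inside a single transaction, roughly (the table and columns here are invented):

    BEGIN;
    -- prepare once, then execute per buffered row (extend the column list to the real 27 columns)
    INSERT INTO results (ts, ch0, ch1) VALUES (:ts, :ch0, :ch1);
    -- ...repeat the INSERT for each row...
    COMMIT;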
-
There is something not quite right there. The file size should make no difference to the INSERT performance. This is inserting 100K records with 28 columns; inserting 244 times increases the file size from 0 to 12GB. (I just disabled the drop, create and select in the Speed Example.) There is jitter due to other things happening, but it is not increasing as the file grows.
-
UNION and JOIN are two different things (JOIN is an alias for "INNER JOIN" - you can have other types). A JOIN maps columns from one table to another for indirection. A UNION just appends data. The UNION is used in the WITH RECURSIVE so as to create an ordered queue, which bestows the tree-walking behaviour - it's a fortuitous sleight of hand.
How many columns? Benchmarking 100K x 3 columns (xyz) runs at about 250 ms using my benchmark. Are you saving to a huge single table as if it were a spreadsheet? I get that INSERT rate (1.2 secs) at about 25 columns.
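To make the recursive-CTE point concrete, this is the shape of the tree walk (table and column names are invented for illustration):

    -- walk a parent/child table; the UNION feeds each new level back into the queue
    WITH RECURSIVE tree(id, parent_id, depth) AS (
        SELECT id, parent_id, 0 FROM nodes WHERE parent_id IS NULL
        UNION
        SELECT n.id, n.parent_id, t.depth + 1
        FROM nodes AS n
        JOIN tree AS t ON n.parent_id = t.id
    )
    SELECT id, depth FROM tree ORDER BY depth;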
-
Most eval boards or programmers use a virtual (serial) COM port. If the software they supplied lets you choose COM1, COM2, etc., then you can use the LabVIEW serial VIs to talk to it (if you know the commands).
-
Now you're talking.
-
Ah. Yes. But you can read it out in any order you like just by using the ORDER BY clause. That's the beauty of DBs. The "view" isn't defined by the data structure.
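For example (the table and column names are invented):

    -- stored order is irrelevant; ask for the order you want at read time
    SELECT ts, value FROM log ORDER BY ts DESC;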
-
It's an unusual use case and I wouldn't recommend a DB for this, since there is a lot of overhead for realising a relational DB that you just don't need. However, I would suggest you UPDATE rather than DELETE. You wouldn't clear a memory location before writing a new value to it in a ring buffer; you'd just overwrite, because it is more efficient. DELETE is an extremely expensive operation compared to UPDATE, as well as causing more esoteric things like fragmentation (VACUUM resolves this but can take a very long time).
Thinking about what you are doing a bit more: you are not using a ring buffer, are you? You have a fixed-length FIFO. What you probably want is INSERT OR UPDATE, which isn't directly supported by SQLite but can be emulated. The easy one would be INSERT with the REPLACE conflict condition, but I think that just does a delete then an insert, so performance-wise you are no better off. The implementation is easier than messing with triggers, though.
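A sketch of what I mean (the table, buffer size and column names are invented); the UPDATE at the end is the cheaper option once every slot has been written at least once:

    -- fixed-size buffer addressed by a running sample counter modulo the buffer length
    CREATE TABLE ring (slot INTEGER PRIMARY KEY, ts REAL, value REAL);

    -- REPLACE overwrites the slot, but internally it is a delete followed by an insert
    INSERT OR REPLACE INTO ring (slot, ts, value) VALUES (:counter % 1000, :ts, :value);

    -- a plain UPDATE of the slot avoids the delete/insert cycle
    UPDATE ring SET ts = :ts, value = :value WHERE slot = :counter % 1000;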
-
Ahh. I get it. Yes, that would be a useful optimisation for this scenario. The Time,WN might not be unique, but if that's not an issue I can see it simplifies things greatly. It's taking advantage of the hash-table lookup under the hood. I can think of a few more uses for that too. I wonder what the performance difference is between that and a key/value lookup like, say, LevelDB.