Everything posted by drjdpowell
-
I have always done the second method, and just accepted the extra overhead of one extra message pass. Beware of premature optimization. It is generally rare for me to want to just forward a message like this without the forwarding actor needing to change or react to the message in some way. An alternate design is to accept that you don't have an actor-subactor relationship, but that your subactor should really be a helper loop of the actor. A dedicated helper loop can share references without a problem. Your "actors sharing references" is a potentially suboptimal mix of "actors are highly independent but follow restrictive rules" and "helper loops have no restrictions but are completely coupled to their owner".
-
I have never noticed any memory issue, and I wouldn't expect memory to be a problem with customized controls (the issue that can affect controls is an excessive redraw rate, which shows up as higher CPU load, not memory).
-
I don't think you can match the raw pull-from-file performance of something like TDMS (unfragmented, of course). SQLite's advantage is its high capability, and if you aren't using those capabilities then it will not compare well.
-
Can you post an example?
-
The first question is: what is your table's Primary Key? I would assume it is your Timestamp, but if not, looking up a small time range will require a full table scan rather than a much quicker indexed search. Have you put a probe on your prepared statement? The included custom probe runs "EXPLAIN QUERY PLAN" on the statement and displays the analysis. What does that probe show?
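For anyone following along outside LabVIEW, here is a minimal sketch of the same check using Python's built-in sqlite3 module; the table and column names (data, Timestamp, Value) are stand-ins for whatever your schema actually uses.

```python
import sqlite3

# Hypothetical schema standing in for the table described above
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (Timestamp REAL PRIMARY KEY, Value REAL)")

query = "SELECT Value FROM data WHERE Timestamp BETWEEN ? AND ?"

# Ask SQLite how it intends to run the query
for row in con.execute("EXPLAIN QUERY PLAN " + query, (0.0, 1.0)):
    print(row)
# With Timestamp as the PRIMARY KEY the plan reports a SEARCH using an index;
# drop the PRIMARY KEY and it becomes "SCAN data" -- a full table scan.
```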
-
My app failed due to Queues being created but not closed under a specific condition. Memory use was trivial, and logged app memory did not increase, but at 55 days the app hit the hard limit of a million open Queues and could not create new ones.
-
I have an app that uses a watchdog built into the motherboard. Failure to tickle the watchdog triggers a full reboot, with the app automatically restarting and continuing without human intervention. In addition, failure to get data will also trigger a restart as a recovery strategy. It still failed at 55 days, due to an issue that prevented a Modbus client from connecting and actually getting the data from the app. That issue would have been cleared up by an automatic reboot, but detecting it was not considered.
-
Even that way is hard, as you have to detect the problem in order to trigger a restart, and it is hard to come up with a foolproof detection method for all potential failure modes.
-
No, unfortunately, as it uses VIMs that are only available in 2017+.
-
Are you using matching bitness? You have to use the same bitness, 32 versus 64, for all DLLs and LabVIEW.
-
I don't tend to use them unless they have meaning beyond the message itself (i.e. they are a natural grouping of data, rather than something that exists only for the message). Most of my messages contain only one piece of data, so no typedef is needed (unless that data is naturally a typedef). Also, it is possible to go further than just a typedef: have an API of subVIs to send and receive the messages in a common library. This can be a lot more powerful than just a typedef. An example would be having any message starting with "Config:..." passed to a Config subVI, with multiple possible messages being handled by that subVI ("Config: Get as JSON", "Config: Set from JSON", etc.). Another option is to send an Object that has multiple methods the receiving actor can use. I view a typedef as a poor cost-benefit trade-off, since you have the coupling issue without the maximum possible benefits.
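Purely as an illustration of the "Config:..." routing idea (not Messenger Library's actual API), here is a minimal text-language sketch in Python; the message labels and handler names are hypothetical.

```python
import json

# Hypothetical actor state and handlers, purely to illustrate prefix routing
config = {}

def handle_config(label, data):
    # One subVI-like handler owns everything that starts with "Config:"
    if label == "Config: Set from JSON":
        config.update(json.loads(data))
    elif label == "Config: Get as JSON":
        return json.dumps(config)

def handle_message(label, data=None):
    if label.startswith("Config:"):
        return handle_config(label, data)
    # ...other messages handled here...

handle_message("Config: Set from JSON", '{"rate_hz": 10}')
print(handle_message("Config: Get as JSON"))   # {"rate_hz": 10}
```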
-
Plan? Yes, but not a priority. Note, though, that JSONtext returns substrings, meaning you can implement filtering in LabVIEW without making data copies, so the following code implements your example. More effort, of course, but it should be fast, and doing it in LabVIEW is very flexible and more debuggable.
-
Unfortunately this contrasts with the current behaviour, where null --> NaN for a floating-point number rather than becoming the default number input. In standard JSON, the float values NaN, Infinity and -Infinity have to become null, and converting them back to a default value doesn't make sense. We could add an "ignore null items" option, which would treat nulls as equivalent to that item not existing.
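As a minimal illustration of that round trip (standard library Python, not JSONtext itself): non-finite floats have to become null on write, and null maps back to NaN on read rather than to a default number.

```python
import json, math

def float_to_json(x):
    # Standard JSON has no NaN/Infinity, so non-finite floats must become null
    return json.dumps(x if math.isfinite(x) else None)

def json_to_float(s):
    # Map null back to NaN rather than to some default number
    v = json.loads(s)
    return float("nan") if v is None else float(v)

print(float_to_json(float("nan")))   # null
print(json_to_float("null"))         # nan
```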
-
A possibly better option is to replace your string/integer values with JSON strings, which are LabVIEW strings with <JSON> at the start of the name. So "<JSON>Comment" rather than "Comment". JSONtext will happily parse the value into your cluster as JSON; then, when you actually need the value, you can convert it to a string/integer, with appropriate handling if it is null or throws an error. This method, by the way, is also how to handle JSON that has a variable structure.
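A rough Python analogue of that deferred conversion (input and field names hypothetical): keep the field as raw JSON text first, and only decide what null means when the value is actually used.

```python
import json

raw = '{"id": 7, "Comment": null}'   # hypothetical input with a null field

# First pass: keep the field as JSON text, the analogue of parsing it into a
# "<JSON>Comment" string element of the cluster.
doc = json.loads(raw)
comment_json = json.dumps(doc.get("Comment"))   # '"some text"' or 'null'

# Later, when the value is actually needed, decide what null should mean:
comment = "" if comment_json == "null" else str(json.loads(comment_json))
print(repr(comment))   # ''
```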
-
That one is trickier. What is null supposed to mean, interpreted as an integer? Unlike for floats, there is no Not-an-Integer value, nor is there an obvious "null" value like an empty string. Should one just use zero as null? Or should one consider the null to be the same as if the item were not present at all in the JSON, thus using your default value from the supplied cluster? Why is whatever generated this JSON providing null values at all, especially in place of strings and integers?
-
The error comes from where you attempt to convert null (as in "comment":null) to a string. Intuitively, one would think that a null should be equivalent to an empty string (just as, when converted to a number, null becomes NaN rather than throwing an error). Thank you for the report.
-
Why the "Preallocated"; are you trying to have a firm limit of 12 clones due to the long-term running? With "Shared Clones" this stuff has been implemented by numerous frameworks, but I have not seen a fixed-size pool of clones done. I would implement such a thing with my existing Messenger-Library "actors", but with an additional "active clone count" to keep things to 12.
-
Post a simple vi showing the error, please.
-
The first thing I would try is to follow the "Useful Hint" given in the spatialite link and load the extension via the sqlite3 command-line tool, to see whether it works there or gives a more informative error message.
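If the command-line tool isn't handy, a similar quick check can be done from Python's sqlite3 module, assuming it was built with extension loading enabled and that mod_spatialite is on the library search path (both assumptions, not guarantees):

```python
import sqlite3

con = sqlite3.connect(":memory:")
try:
    con.enable_load_extension(True)        # missing if Python's sqlite3 was
                                           # built without extension support
    con.load_extension("mod_spatialite")   # assumed library name / path
    print(con.execute("SELECT spatialite_version()").fetchone())
except (AttributeError, sqlite3.OperationalError) as e:
    # This error text is usually more informative than what LabVIEW reports
    print("load failed:", e)
```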
-
I don't use any Messenger-Library-specific way. Usually I kill a reference like a notifier, but lately I have been using the new "channels" feature, which has a "last value" shutdown Boolean. So far, channels have produced cleaner code.
-
Sorry, I've been lax in updating the Tools Network and the LAVA-CR. I have updated the LAVA CR to the 1.11.1 version.
-
Just a warning: this bug is the prime suspect in an application failing after 55 days of continuous use. LabVIEW has a limit of one million Queues alive at any time, after which "Obtain Queue" throws Error 2: out of memory. This happens after about 55 days if one actor shuts down (without its Caller shutting down) every 5 or so seconds: one leaked Queue every ~5 seconds is roughly 17,000 Queues per day, which reaches the million-Queue limit in a little under two months. Unfortunately the application was built two days before this bug was reported. Please upgrade to the latest Messenger Library version.
-
Sadly, I still have no experience with PPLs, and you'll have to ask someone else. I have no idea about palettes.
-
Here are a couple of diagrams. You seem to be doing this: Where the privately-namespaced copy of Messenger Library in your PPL cannot communicate. But I'm suggesting this: Where every component uses a common PPL.
-
Why?