Posts posted by drjdpowell

  1. Also, if your slave process should be continuous except when handling messages, do you utilise a timeout method, or do you separate the behaviours within the slave by adding another layer via a separate message handler?

    There are two ways I've dealt with "continuous process" slaves in the past: Heartbeats and DVRs.

    I had a similar issue recently that I handled in a different way: a fourth option to consider alongside Timeouts, Heartbeats and DVRs (I had never thought of the last one).

    I was writing software to log to an SQLite database; each individual SQLite transaction to disk takes a relatively long time, so it is best to accumulate log messages and save them as a batch periodically. I solved it with a “Scheduled Tasks” VI, shown below (in a background process in the “Command Pattern” OOP style):

    post-18176-0-43925600-1334692777_thumb.p

    “Scheduled Tasks” is called after each message and outputs a timeout that feeds back into the dequeue. Internally, “Scheduled Tasks” checks whether it is time to write the accumulated log messages to disk, and if not, calculates the remaining milliseconds, which it outputs. Thus the task always gets done on time, regardless of how many messages are incoming. A disadvantage is that the timeout calculation has to be done after each message, but it isn’t a big calculation. An advantage is that “Scheduled Tasks” outputs −1 (no timeout) after it flushes all waiting messages to disk; thus, if log messages arrive very rarely, this loop spends most of its time just waiting.

    It worked out quite well in this application, so I thought I’d mention it.
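
    For anyone who prefers reading the idea as text, here is a rough sketch of the same pattern in Python (not the actual LabVIEW code; the queue, the one-second interval and the write_batch_to_db stub are just placeholders):

        import queue, time

        FLUSH_INTERVAL = 1.0          # seconds between batched writes (arbitrary)
        msg_queue = queue.Queue()     # stands in for the message dequeue
        pending = []                  # accumulated log messages
        next_flush = 0.0
        timeout = None                # None = wait forever, like a -1 ms timeout

        def write_batch_to_db(batch):
            pass                      # placeholder for one batched SQLite transaction

        while True:
            try:
                msg = msg_queue.get(timeout=timeout)
                if msg == "shutdown":
                    break
                if not pending:                          # first message of a new batch
                    next_flush = time.monotonic() + FLUSH_INTERVAL
                pending.append(msg)
            except queue.Empty:
                pass                                     # timed out: fall through

            # "Scheduled Tasks": flush if due, otherwise report the time remaining
            if pending and time.monotonic() >= next_flush:
                write_batch_to_db(pending)
                pending.clear()
            timeout = None if not pending else max(0.0, next_flush - time.monotonic())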

    — James

  2. On a side note, does anyone know how one might implement this using LabVIEW? It would allow creation of new SQL functions (such as ones that can handle LabVIEW timestamps). It requires passing a function pointer to the SQLite DLL function. I have no real experience in such things, but I would have thought one could make a VI into a DLL and somehow pass a pointer to it to SQLite. But Google says no.
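
    For what it's worth, this is what registering a custom SQL function looks like from a binding that already supports callbacks (Python's sqlite3). It isn't an answer to the LabVIEW function-pointer problem, just an illustration of what sqlite3_create_function provides (the example function and format string are my own):

        import sqlite3
        from datetime import datetime, timezone

        def iso_to_unix(text):
            # custom SQL function: ISO8601 text -> Unix seconds, NULL on bad input
            try:
                dt = datetime.strptime(text, "%Y-%m-%dT%H:%M:%S.%fZ")
                return dt.replace(tzinfo=timezone.utc).timestamp()
            except (TypeError, ValueError):
                return None

        conn = sqlite3.connect(":memory:")
        conn.create_function("iso_to_unix", 1, iso_to_unix)
        print(conn.execute("SELECT iso_to_unix('2012-04-17T12:00:00.000Z')").fetchone())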

  3. Yet another Timestamp issue: SQLite3’s datetime function (which can be handily used in an SQL statement as "datetime('now')") is in the format YYYY-MM-DD HH:MM:SS and is UTC, but the proper ISO8601 format is YYYY-MM-DDTHH:MM:SS.SSSZ. One can get this format in an SQL statement by using the longer "strftime('%Y-%m-%dT%H:%M:%fZ','now')".

    Currently, my thinking is to have “Bind Timestamp” which saves the full 128-bit LV Timestamp as a BLOB, and “Bind Timestamp (Text)” which saves the ISO8601 format as TEXT. I will have a single “Get Column Timestamp” that checks for the datatype and tries to convert accordingly (16-byte BLOBs or TEXT of the right format).
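
    A quick way to compare the two SQL expressions above (run here through Python's sqlite3 binding, purely for illustration):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        # datetime('now') gives "YYYY-MM-DD HH:MM:SS" in UTC, whole seconds only
        print(conn.execute("SELECT datetime('now')").fetchone()[0])
        # strftime with %f adds fractional seconds; the 'T' and 'Z' are literals
        print(conn.execute("SELECT strftime('%Y-%m-%dT%H:%M:%fZ','now')").fetchone()[0])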

  4. Again don't support NaN's. (or use http://www.mail-arch...g/msg68928.html but then again how to handle +Inf, -Inf)

    That conversation is the one I came across, and the solution I chose. +Inf and -Inf store in SQLite3 with no problem. It is only NaN that is treated differently.
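
    This is easy to confirm from any SQLite binding; for example, through Python's sqlite3 (illustration only):

        import math, sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (x REAL)")
        conn.executemany("INSERT INTO t VALUES (?)",
                         [(math.inf,), (-math.inf,), (math.nan,)])
        # +Inf and -Inf come back as REAL; the NaN row is stored and returned as NULL
        print(conn.execute("SELECT x, typeof(x) FROM t").fetchall())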

    Use ISO8601, thus save as text.

    I’m considering removing the “Bind Timestamp” and “Get Column Timestamp” methods entirely, thus forcing the User to explicitly decide what to use as Timestamps. Possibly with some support VIs to convert LabVIEW Timestamps into ISO8601 text format or the other two types suggested in the SQLite3 documentation: Julian day number as a DBL, or Unix Time as an integer. Other options (the number of possibilities is why I’m considering dropping Timestamps altogether) are the LV timestamp as a DBL, or the full 128-bit LV timestamp as a BLOB.
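
    The conversions themselves are simple; a sketch (in Python rather than LV, helper name my own) of the three representations suggested in the SQLite3 documentation:

        from datetime import datetime, timezone

        def timestamp_representations(dt_utc):
            unix = dt_utc.timestamp()                 # Unix Time: seconds since 1970-01-01 UTC
            julian = unix / 86400.0 + 2440587.5       # Julian day number as a DBL
            iso = dt_utc.strftime("%Y-%m-%dT%H:%M:%S.") \
                  + "%03dZ" % (dt_utc.microsecond // 1000)   # ISO8601 text, ms resolution
            return iso, julian, int(unix)

        print(timestamp_representations(datetime.now(timezone.utc)))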

    — James

  5. Serverless. It's just a library that you distribute with your application. No other processes, installers, etc.

    And that library is a mere 564 kB. Very light. Being so small and simple, it allows one to think of using a database solution for a wider array of problems. One thing that needs to be done is for someone to compile the SQLite source for Real Time targets. Any volunteers?

  6. 1) Non-issue really. If you see a LabVIEW string, you'll always have to check for null characters anyways to decide if you're going to bind text/blob. Unless you store all text as blobs, but then you need to throw collation out the window (I think?) and searching becomes interesting.

    Yes, the collation is a big reason not to just go with BLOB for all LV strings.

    2) U64s will store just fine as text, though searching might be a bit weird. Keep in mind SQLite decides how to store something, not you. Even if you bind the string "123" as text, there's a good chance SQLite will store it as an I8 instead (though column affinities might come into play, not sure).

    I think searches would go wrong for U64 values too high to convert into an I64.
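
    For example (a quick check through Python's sqlite3 binding, not the LV library): a value just past the I64 range can't even be bound as INTEGER, and once U64s are stored as TEXT they compare lexically rather than numerically:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (v)")
        conn.execute("INSERT INTO t VALUES (?)", (2**63 - 1,))     # largest I64: fine
        try:
            conn.execute("INSERT INTO t VALUES (?)", (2**63,))     # one past I64: rejected
        except (OverflowError, sqlite3.Error) as e:
            print("too big for INTEGER:", e)
        # stored as TEXT, "9" sorts after "18446744073709551615"
        conn.executemany("INSERT INTO t VALUES (?)", [("9",), (str(2**64 - 1),)])
        print(conn.execute("SELECT v FROM t WHERE typeof(v)='text' ORDER BY v").fetchall())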

    If the user is requesting a DBL, do a type check: if you see a DBL, retrieve the data, if you see a null, return NaN.

    That’s what I did.

    4) For timestamps, I like the ISO8601 strings ("YYYY-MM-DD HH:MM:SS.SSS") values. They're easy to read, easy to parse, easy to generate.

    But they’re 23 bytes instead of 8. I can modify “Get Column Timestamp” to handle ISO8601 strings in addition to DBLs. And perhaps I could have two “Bind Timestamps”: “Bind Timestamp DBL” and “Bind Timestamp ISO8601”?

  7. If anyone has SQLite experience, can you comment on my choices for data type conversion between SQLite3 and LabVIEW? There isn't a clear one-to-one conversion between LabVIEW types and SQLite's dynamic typing system, so I ended up deciding to leave the choice of type up to the User. This has the disadvantage of requiring the user to understand the SQLite3 datatypes in addition to LV types, but it has the advantage of full control. The specific issues/choices I made are:

    1) SQLite3 has "TEXT" (UTF-8 encoded, zero-terminated strings) and "BLOB" (binary), while LabVIEW has strings used as either ANSI-encoded characters or binary (as in "Flatten to String"). This is a problem for any possible Variant-to-SQLite converter, as it is not possible to determine if a particular string is really character text or binary.

    2) SQLite3 "INTEGER" is variable size (1 to 8) bytes and can hold any LabVIEW integer type except U64. I use I64 as the corresponding LV type. Not sure what to do about U64.

    3) "REAL" is easy, as it is exactly the same as LabVIEW DBL. Except for one slight issue: "NaN" is not allowed by SQLite and is converted to "NULL", but "NULL" is retrieved by SQLite as zero! I opted to override this and return any NULLs as "Not a Number" if retrieved as a DBL.

    4) There is no timestamp data type in SQLite3. I added functions for saving LV Timestamps as REAL (DBL) values. However, there are alternative choices for timestamps that would allow the use of SQLite’s built-in SQL date/time functions.

    There is a “Get Column Variant” property that converts any SQLite value to a LV Variant (REAL -> DBL, INTEGER -> I64, NULL -> Void, TEXT/BLOB -> String), but there is no function for binding a LV Variant, because of the difficulties described above.
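
    To make point 3 concrete, the override is no more than this (sketched in Python rather than LV, purely for illustration):

        import math, sqlite3

        def get_column_dbl(value):
            # NULL retrieved as a DBL becomes Not-a-Number rather than zero
            return math.nan if value is None else float(value)

        conn = sqlite3.connect(":memory:")
        row = conn.execute("SELECT NULL, 2.5").fetchone()
        print([get_column_dbl(v) for v in row])    # [nan, 2.5]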

    — James

  8. James, any chance of you sharing the source for your custom tool or contributing it to a new community project to develop an "Actor Manager" we can all use?

    My Actor Manager is very specific to my messaging framework, but I did a trial prototype for the 2012beta Actor Framework here.

  9. Perhaps with VITs, the block diagram has to be traversed to see if changes need to be made? I notice that when using a VIT, the block diagram is changed slightly, by having the reference updated to point to the new VI created rather than the original VIT. Checking the block diagram could take considerable time (215 ms on my XP-on-virtual-machine system). Making a clone, on the other hand, requires only a new data space.

    • Use LV 32-bit since 64-bit LV returns Error 12 for the SQLite DLL calls (I wonder if wrapping the SQLite exe rather than DLL would give better platform independence? just thinking aloud)

    Thanks, should have mentioned that. I took the precompiled win32 binary from the SQLite Downloads page. I specify the SQLite binary at only one point in the library, so it should be easy to substitute different compiled code for different operating systems using a single conditional disable structure.
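
    For illustration, the per-OS switch only needs to select a different binary name at that one point; a sketch in Python/ctypes rather than a conditional disable structure (the non-Windows library names here are just the usual system defaults, not files I ship):

        import ctypes, sys

        if sys.platform.startswith("win"):
            libname = "sqlite3.dll"           # the precompiled win32 binary
        elif sys.platform == "darwin":
            libname = "libsqlite3.dylib"
        else:
            libname = "libsqlite3.so"

        sqlite = ctypes.CDLL(libname)
        sqlite.sqlite3_libversion.restype = ctypes.c_char_p
        print("SQLite version:", sqlite.sqlite3_libversion().decode())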

    — James

  10. Haven’t studied your latest versions, but here’s something I whipped up quickly:

    SubView.zip

    It allows the launching of subViews such that, when the owning view goes idle, a “Shutdown” User Event is triggered in the owned subView. The code in the subView is just a User Event control, while the code in the owning view is a single “Launch SubView” subVI with inputs for VI and sub panel refs:

    post-18176-0-10665000-1332841544.png

    Internally, “Launch SubView” creates a queue and passes it, along with the VI and sub panel refs, to a dynamically launched “SubView Helper” (shown below). SubView Helper creates the “Shutdown” User Event, calls the subView VI and puts it in the sub panel. It then waits for the queue created in “Launch SubView” to go invalid (which happens when the owning view goes idle) and then fires the Shutdown event.

    post-18176-0-26972500-1332841902.png

    This seems to work, shutting down all subViews when the owning view stops for any reason. I have one unresolved issue where the subViews remain reserved for execution while the owning View is still in memory. But the subViews do shut down.

    This is a pattern I call “autoshutdown slave”, where a dynamically launched process is tied to its launcher such that it will automatically shut down if the launching VI hierarchy goes idle (stops) for any reason. The connection is made by a reference created in the launching hierarchy, which goes invalid and throws an error in the launched process (a User Event going invalid doesn’t throw an error anywhere, so I had to use a queue instead, which makes this example more complex than it would otherwise be).
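
    For anyone wanting the idea outside of a LabVIEW diagram, here is a rough analogy in Python (not a translation of the VIs above): the slave blocks on a connection its launcher created, and the error raised when the launcher’s end disappears serves as the shutdown signal:

        import multiprocessing as mp

        def slave(conn):
            while True:
                try:
                    msg = conn.recv()          # blocks, like a dequeue
                except EOFError:               # launcher's end gone: "reference invalid"
                    print("launcher gone, shutting down")
                    break
                print("slave got:", msg)

        if __name__ == "__main__":
            mp.set_start_method("spawn")       # child gets only its own end of the pipe
            parent_end, child_end = mp.Pipe()
            worker = mp.Process(target=slave, args=(child_end,))
            worker.start()
            child_end.close()                  # launcher keeps just the parent end
            parent_end.send("hello")
            parent_end.close()                 # launcher "goes idle": slave shuts down
            worker.join()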

    — James

  11. In the end, I think there is no way in the present version of LabVIEW to do what we want in an ideal fashion (regrettably). As far as I can see the only options require a compromise. I think we can either

    1) wrap the view VIs with something to shut them down (abort them) -- which is what I did in the code I attached and what my colleague did with the XControl;

    OR

    2) put a control to pass a user event on the front panel of each view VI. I actually started such a thing last week (making the control small and invisible and putting it at the origin), but I decided I didn't like it and abandoned it. Maybe this is the more palatable compromise, though. I'm going to revisit it now.

    I go with (2), myself (you can see the hidden “Startup Message” in my code image above). However, there is also a third option: using a temporary named queue to do the necessary information passing. Use the VI name (or clone name) as part of the queue name to ensure it is unique. This gets around the need for a hidden control.

  12. Hi AQ,

    I thought the concern about event order was due to the fact that an event structure pulls events from multiple queues: the static-registered events, and one (or more) dynamically registered event refnums. It’s possible to fire a few events into a dynamic queue before connecting it to an event structure, so it isn’t obvious how the structure determines in what order to pull off events from multiple queues. If it uses a timestamp, then what about near-simultaneous events with the same timestamp?

    — James

  13. If you have any communication method with the dynamically-launched VI (queue, notifier, etc.), you can create this reference in the calling VI. When the calling VI goes idle, this invalidates the comm reference, throwing an error (from the dequeue) which can be used to shut down gracefully. Alternatively, one can programmatically release the reference. An advantage of this is that dynamically-launched processes will always shut down gracefully, regardless of how the top-level VI exits (no orphaned processes left running in the background). I call these processes “autoshutdown slaves”.

    Unfortunately, one can only do this with User Events via polling (as an invalidated User Event doesn’t throw an error). I hide the polling in a background process that fires a second “shutdown” User Event, so at least it doesn’t look ugly.

    post-18176-0-77454400-1332001357.png

  14. Question 6: My files are constantly prompting for saving. It seems like most of these are changes from other files and not the file I changed. Since I don't really know how this works, I am not sure what exactly is going on. The end result, though, is that I want to minimize version changes (as tracked through SVN). I seem to remember reading somewhere that in LV2010 you can separate source code and compiled code.

    The changes are probably recompiles caused by subVI changes. Separating compiled code from source code should stop this. I upgraded to LV 2011 recently and separated compiled code; it seems to work well so far.

    Question 3: If I am running multiple modules, how do I ensure that they have no namespace collisions? Should these be libraries as well? Do I only need a project if I want to deploy my code?

    I generally put all subVIs in libraries (or class libraries), other than test code.

    Question 2: If I create a vi and later decide to move it into the library, how do I accomplish this in Labview with SVN tracking. For example, if I create a vi in one of my modules and realize that it is fairly generic and would be better in a library, how do I move it to the library (ideally on disk and into the library file) so that both Labview and SVN are happy. Another situation might be moving a file from one module to another module, ideally I could move both library association and disk location.

    Personally, I move VIs such that the Project is happy and just let SVN consider the file deleted and created new in another place. Makes your SVN repository bigger, but this isn’t a big issue.

  15. I would say “no”, as generally one could choose to either use variant messages or a long list of specific-type messages. Array of Variants is kind of a mix of both.

    — James

    Aside: After our previous conversation, I actually modified my own message hierarchy by eliminating all simple-type messages in favor of VariantMessages, except where there was extra functionality involved (for example: ErrorMessage). And in actual use, I tend to either use completely generic messages (Variant) or create specific-purpose messages for specific uses. The latter can have multiple data elements and are usually used in the “Command Pattern” (i.e. they have “Execute" or “Do" methods rather than “Read”).

    Thank you for considering VariantMessage for LapDog, BTW. I have recently been doing consulting work where I can't use my own messaging package, so I’m interested in LapDog being widely adopted and as flexible as possible.

  16. How do your B and C VIs communicate with the outside world? If I were doing something like this, each VI would have a queue (or similar) message-receiving system, and it would shut down on receiving a “Shutdown” message or if its message queue becomes invalid. This makes full shutdown of everything quite easy.
