Posts posted by drjdpowell

  1. As the code grew I accidentally came across this solution. Not great, and I should make it a typedef, but it's better than it was.

     

    Darn. I actually had a DEV template modification that demonstrates some of this, but I removed it from the published package because I thought it was too niche. See a screenshot below. I also need to use a subVI to name the event registration. You can also see a "Private Messenger" in the JKI Actor Template.

     

    Note, BTW, that I don't use EventMessengers in the way you are: a different Messenger for each message. Instead I have multiple messages coming in through one event case, with a case structure on the message label. The only reason I sometimes have a second EventMessenger is to separate "Public" (or "External") messages from calling code from "Private" (or "Internal") messages coming from my subActors. So I can prevent calling code from being able to send me private messages, and I can limit what subActors can do (like not being able to tell me to shut down, for example).

     

    post-18176-0-90207100-1452092837.png

  2. - To JSON, which produces a raw string

    - To JSON, which produces a JSON.lvlib object, which the library then flattens in the usual way into a string.

    I'd prefer the second option myself... is that what you're going for?

    The second option.

     

    PS> I'm a bit stuck on this at the moment because of a problem with "Variant to Data" being too "dumb" when it comes to child classes. If one has a Variant containing a Parent-class datatype holding a Child-class object, and you attempt to use "Variant to Data" to cast this to a Child-class wire, it throws a type mismatch error, even though such a conversion is easily possible. This is a problem when the library user wants to use clusters containing their own child classes. There are a couple of workarounds, but both are ugly.
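For readers outside LabVIEW, here is a minimal Python analogy of the desired behavior (names are illustrative, not from the library): a value stored under its parent type should still be safely downcast to the child type when the runtime object really is a child.

```python
# Python analogy of the "Variant to Data" downcast issue: a value whose
# static type was erased (like storing in a Variant) can still be cast
# back to the child type, because the runtime object IS a child.
class Parent:
    pass

class Child(Parent):
    def __init__(self, x):
        self.x = x

def to_variant(obj: Parent) -> object:
    """Erase the static type, like storing a Parent wire in a Variant."""
    return obj

def variant_to_child(variant: object) -> Child:
    """Safe downcast: succeed only if the runtime value is a Child."""
    if not isinstance(variant, Child):
        raise TypeError("type mismatch")
    return variant

v = to_variant(Child(42))      # statically "Parent", actually a Child
assert variant_to_child(v).x == 42
```

This runtime check is exactly the conversion that LabVIEW's "Variant to Data" refuses to perform in the situation described.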

  3. Any thoughts on incorporating the JSON1 functionality included with the latest release of SQLite? This new functionality is quite intriguing to me, as it seems to allow SQLite to be a possible "best of both worlds" solution for relational and document-oriented data management situations, along the lines of what the latest version of PostgreSQL has.

    The JSON SQL functions should be working in this beta (I think it includes the 3.9.0 dll with those functions).   I’ve only played around with them a little, but they do work.   
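As a quick illustration of what those JSON SQL functions enable, here is a small Python sketch (table and column names are made up for the example; it assumes the linked SQLite library includes the JSON functions, as the beta's 3.9.0 dll reportedly does):

```python
import sqlite3

# Sketch: querying inside a JSON document stored in an ordinary column,
# using SQLite's JSON functions (json_extract).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE runs (id INTEGER PRIMARY KEY, meta TEXT)")
con.execute("INSERT INTO runs (meta) VALUES (?)",
            ('{"operator":"James","temp_C":21.5}',))

# Pull a single field out of the JSON document in SQL, not in code.
(operator,) = con.execute(
    "SELECT json_extract(meta, '$.operator') FROM runs").fetchone()
```

This is the "best of both worlds" idea: relational columns for the structured part, JSON documents for the free-form part, both queryable with one SQL statement.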

  4. Note: I'm thinking of putting in an abstract parent class with "To JSON" and "From JSON" methods, but I'm not going to specify how exactly objects will be serialized. So Charles can override with his implementation (without modifying JSON.lvlib) without restricting others from creating classes that can be generated from different JSON schemas. I could imagine use cases where one is generating objects from JSON that is created by a non-LabVIEW program that one cannot modify, for example. Or one just wants a different format; here's a list of objects from one of my projects:

    [
    ["SwitchRouteGroup",{"Connect":"COMMS1"}],
    ["DI Test",{"IO":"IO_ID_COMMS_1_PRESENT","ON=0":false,"Value":true}],
    ["Set DO",{"IO":"IO_ID_COMMS_1_EN","ON=0":true,"Readable?":true,"Value":false}],
    ["Wait Step",{"ms":2000}],
    ["DMM Read Voltage",{"High":2,"Low":-2}],
    ["Set DO",{"IO":"IO_ID_COMMS_1_EN","ON=0":true,"Readable?":true,"Value":true}],
    ["Wait Step",{"ms":500}],
    ["DMM Read Voltage",{"High":13,"Low":11}],
    ["AI Test",{"High":3600,"IO":"IO_ID_COMMS_1_V_SENSE","Low":3400}],
    ["SwitchRouteGroup",{"Disconnect":"COMMS1"}]
    ]
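To show how that schema consumes naturally, here is a hedged Python sketch (the step names come from the listing above; the handler bodies are invented placeholders): each element is a two-element array of [step name, parameter object], which maps onto a dispatch on the name.

```python
import json

# Sketch of consuming the step-list format above: each element is
# ["Step Name", {parameters}], dispatched by name.
steps_json = '''
[
["Wait Step",{"ms":2000}],
["DMM Read Voltage",{"High":2,"Low":-2}]
]
'''

def run_step(name, params):
    # Placeholder handlers standing in for the real test-step objects.
    if name == "Wait Step":
        return f"wait {params['ms']} ms"
    if name == "DMM Read Voltage":
        return f"check {params['Low']}..{params['High']} V"
    raise ValueError(f"unknown step: {name}")

results = [run_step(name, params) for name, params in json.loads(steps_json)]
```

The point of the format is that a non-LabVIEW tool can generate or edit such a step list with any ordinary JSON library.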
  5. The drawback is that I need to modify JSON.lvlib, and I didn't take the time to update my fork to stay in line with the development of the JSON library.

    I could easily make a "JSON serializable" class with abstract "Data to JSON" and "JSON to Data" methods, and have the JSON-Variant tools use them. Then you could inherit off this class and not have to modify JSON.lvlib.

  6. Do you see drawbacks?

    Problems show up in the unflattening part. Flattening, the part you are trying, is straightforward, because you are going from strongly-typed LabVIEW to weakly-typed JSON. The other way is harder.

     

    You don't need to make your class a child of JSON Object; just give your class "To JSON" and "From JSON" methods. You won't be able to use these objects in the JSON-Variant tools, but you will be able to handle things with the other methods. This involves more coding, but is faster. And it allows one to get away from the monster config cluster and towards a distributed config, where different code modules supply different parts of the config.
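The "just give your class To JSON / From JSON methods" approach can be sketched in Python terms (class and field names are illustrative): no common base class is required, only a pair of methods the serialization code agrees to call.

```python
import json

# Sketch of per-class serialization methods, the pattern suggested
# above: each class knows its own JSON representation.
class MotorConfig:
    def __init__(self, speed, enabled):
        self.speed = speed
        self.enabled = enabled

    def to_json(self) -> str:
        return json.dumps({"speed": self.speed, "enabled": self.enabled})

    @classmethod
    def from_json(cls, text: str) -> "MotorConfig":
        d = json.loads(text)
        return cls(d["speed"], d["enabled"])

# Round trip: object -> JSON text -> object.
cfg = MotorConfig.from_json(MotorConfig(1500, True).to_json())
```

Because each module's classes serialize themselves, the overall config can be assembled piecewise rather than as one monster cluster.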

  7. What exactly do I have to do in order to dump my monster as JSON and vice versa? Create a serializable.lvclass parent of all my other classes, perhaps, with two methods doing exactly what? Is there an example?

     

    The current JSON-Variant tools will store objects as flattened strings. Not human-readable, but flattened objects have a mutation history, so you can freely rearrange things inside the class private data (but under no circumstances re-namespace the class). I find that mixed mode quite useful and easy.

     

    Alternately, to serialize objects as JSON I usually just have methods in the class to convert to/from JSON and I use them as I build up the JSON explicitly (i.e. without using the shortcut of the JSON-Variant tools on a cluster).  

     

    There is a possibility that one could make one's LVOOP classes children of "JSON Value", and then override "flatten" and "unflatten". Then your objects would serialize to JSON even inside a cluster using the JSON-Variant tools (those tools recognize "JSON Value" objects and flatten/unflatten them). But there is a technical issue that I have to look into to make that work. [Never mind, this won't work.]

     

    Also, suppose the monster still mutates, parameters being added or moved: what recommendations would you give in order to maximise the information which can still be retrieved from an out-of-sync configuration dump?

     

    Adding or removing is fine, but don't rename or move things, unless you want to have a "schema version" attribute and mutation code that converts from older schemas.
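The "schema version plus mutation code" idea can be sketched like this in Python (field names and version numbers are invented for the example):

```python
import json

# Sketch of schema-versioned config loading: older dumps are upgraded
# in place before use, so renamed fields are not lost.
def load_config(text: str) -> dict:
    cfg = json.loads(text)
    version = cfg.get("schema_version", 1)
    if version < 2:
        # Hypothetical v2 change: "rate" was renamed "sample_rate_hz".
        cfg["sample_rate_hz"] = cfg.pop("rate", 1000)
        cfg["schema_version"] = 2
    return cfg

old_dump = '{"schema_version": 1, "rate": 500}'
cfg = load_config(old_dump)
```

Each schema change appends one more upgrade step, so any out-of-sync dump walks forward through the chain to the current layout.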

  8. Ah, I see.  We need something like

     

    Error 402864, SQLITE_BUSY(5), occurred at redacted.lvlib:redacted.vi on "INSERT OR IGNORE INTO redacted (  redacted, redacted, redacted)  VALUES (?, ?, ?);" Possible reason(s): database is locked.  See https://www.sqlite.org/rescode.html#busy.

     

     

    Edit> can’t do the above, but how about:

     

    Error 402864 occurred at redacted.lvlib:redacted.vi on "INSERT OR IGNORE INTO redacted (  redacted, redacted, redacted)  VALUES (?, ?, ?);"

    Possible reason(s):
    SQLITE_BUSY(5) database is locked (see https://www.sqlite.org/rescode.html)

  9. When throwing an error from the API, could you add the SQLite error code to the error description? My client got error 402864 last night, and it took a lot of digging to figure out what that meant. The code doesn't resolve in the Explain Error dialog, and even if it did, I'm more interested in the code returned by the sqlite3 DLL because I use that code to search their documentation when troubleshooting.

     

    I add 402859 to the SQLite error code (apologies for that number, but that is the range of codes assigned to me by NI). I'm currently calling sqlite3_errmsg() to get an error description from the SQLite dll itself. And the message should also have contained either the database file or the SQL statement on which the error occurred. Was this information not in the error?
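The decoding the client had to do by hand is just a subtraction, sketched here:

```python
# Recovering the native SQLite result code from the LabVIEW error code,
# given the 402859 offset described above.
LV_ERROR_OFFSET = 402859

def sqlite_code(lv_error: int) -> int:
    return lv_error - LV_ERROR_OFFSET

# The error 402864 from the post above decodes to SQLITE_BUSY (5).
assert sqlite_code(402864) == 5
```

Putting both numbers in the error text, as proposed above, saves the user from doing this arithmetic.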

  10. Can you add by default the function last insert rowid? 

    I always end up adding this function to your API myself.

     

    https://www.sqlite.org/capi3ref.html#sqlite3_last_insert_rowid

    If you look in the latest beta version you’ll find it (though it’s not in the palettes, it is in the Connection class).  

     

    PS> Some notes:

     

    — one can also use the last_insert_rowid() SQL function

    — Be careful that you aren't executing multiple INSERT statements in parallel on the same Connection, as you might mix up the rowIDs.

    — note the existence of WITHOUT ROWID tables, which avoid the need to determine the auto-assigned rowID.
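The notes above can be illustrated with Python's sqlite3 module, which wraps the same sqlite3_last_insert_rowid() C API:

```python
import sqlite3

# Sketch of reading the last inserted rowid, both through the C-API
# wrapper and through the equivalent SQL function.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")

cur = con.execute("INSERT INTO t (name) VALUES ('first')")
rowid = cur.lastrowid  # per-connection state: avoid parallel INSERTs

# The SQL-level equivalent mentioned above:
(sql_rowid,) = con.execute("SELECT last_insert_rowid()").fetchone()
```

Because the value is per-connection, two INSERTs racing on one connection can report each other's rowid, which is the caution given above.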

  11. Here is a beta version with changes aimed at performance, particularly for large arrays of data. I managed to eliminate an O(N^2) performance issue with parsing large JSON strings. I also improved the Variant tools' speed with one- and two-dimensional arrays. See the new example "Example of an Array of XY arrays.vi".

     

    lava_lib_json_api-1.4.1.33.vip

     

    Please give this a test and let me know of any issues before I place it in the CR.   I’m also thinking of submitting to the LabVIEW Tools Network.

     

    — James

  12. Here is a Beta version with a new "Treatment of Infinity/NaN" terminal on the JSON write functions.

    Three options:

    -- Default JSON: write all as nulls.
    -- Extended: use JSON strings for the Infinities ("Infinity", "-Infinity"); NaN is still null.
    -- LabVIEW extensions (compatible with the LabVIEW JSON primitives): Infinity, -Infinity, NaN (not in quotes)

    Note that the LabVIEW extensions are not valid JSON. The Extended option IS valid JSON, but not all JSON parsers will process strings as numeric values.
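The three treatments happen to be easy to demonstrate with Python's json module (this is a sketch of the same trade-off, not the LAVA library's code):

```python
import json
import math

# Bare Infinity/-Infinity/NaN tokens (the LabVIEW-extensions style):
# not valid JSON, but it is what Python's json emits by default.
lv_style = json.dumps([math.inf, -math.inf, math.nan])
assert lv_style == "[Infinity, -Infinity, NaN]"

# Strict JSON: such values must be rejected (or written as null).
try:
    json.dumps(math.inf, allow_nan=False)
except ValueError:
    pass  # the strict encoder refuses non-finite numbers

# "Extended" style: valid JSON, infinities as strings, NaN as null.
def extended(x):
    if math.isnan(x):
        return None
    if math.isinf(x):
        return "Infinity" if x > 0 else "-Infinity"
    return x

ext = json.dumps([extended(v) for v in (math.inf, -math.inf, math.nan)])
```

As the post notes, the string form parses everywhere as JSON, but only cooperating readers will turn those strings back into numeric infinities.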

     

    lava_lib_json_api-1.4.0.28.vip

  13.  There's no reason to assume any other extensions would need to be mutually exclusive from the LV numeric extensions. An enum would enforce that exclusion.

    I mean alternate treatments of Inf, -Inf, and NaN. I can think of alternatives that are arguably better than the choice made in the LabVIEW primitives. Personally, I would choose NaN == null, Inf/-Inf == "Infinity"/"-Infinity" (strings), as this would be valid JSON. The LAVA JSON package should already be able to read this (I think). Regardless, other JSON packages that we may want to interact with may have made other choices.

  14. Any chance of adding support for Inf and NaN to this API? I want to move off of the native LV prims because of their rigidity with data type and missing items in a JSON stream, but I can't let go of their support for these DBL values. Maybe you can add a property node to the JSON Value class that sets whether or not to allow these values in numeric fields?

    It looks easy enough to add a Boolean to enable use of NaN, Infinity, -Infinity.   It will have to be default False, though, as the default should be to meet the JSON standard.  I would like to see this myself, as I use JSON mostly LabVIEW-to-LabVIEW.   Maybe this should be an enum instead of a boolean, in case we want an alternate extension in the future?

    Edit: Let me clarify, since playing with the API further shows that there is some support for those values. When I try this code on a stream that doesn't have the "updated period " field in its clusters, I don't get NaN out. I get the default value for the data type, instead.

     

    O0634lT.png

     

    Not sure we can get default values of clusters in arrays (also, which array element should be used? First? Same index?). As a workaround, you can just index over the JSON Array elements in a FOR loop and convert each to a Cluster individually. Then you can either provide the same cluster as default, or have an array of different default clusters.
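The per-element-defaults workaround looks like this in Python terms (the field names echo the question above; the merge logic is the illustrative part):

```python
import json

# Sketch of the suggested workaround: iterate over the JSON array
# yourself and merge each element onto a default "cluster", so missing
# fields get your chosen defaults (NaN here) instead of zeros.
default = {"value": 0.0, "updated period": float("nan")}

stream = '[{"value": 1.5}, {"value": 2.5, "updated period": 0.1}]'
clusters = [{**default, **element} for element in json.loads(stream)]
```

The first element is missing "updated period", so it picks up the NaN default; the second keeps its own 0.1.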

  15. Out of curiosity, does storing a path as a string present platform problems? That is if I store "foo\bar.txt" in Windows, are the underlying primitives smart enough to change it to "foo/bar.txt" on a mac?

     

    No, but I'll add the same NI off-palette VIs used elsewhere in the library to convert to/from Mac paths: "Path to Command Line String" and "Command Line String to Path". Thanks.

     

    Do you use LabVIEW on the Mac?  I haven’t in a long while and I could do with someone testing it.

     

    — James

  16. Attached is a beta version of the latest 1.6 version of SQLite Library, for anyone who would like to give feedback. A major addition (not yet well tested) is "Attributes", modeled on Variant Attributes or Waveform Attributes, but stored in any SQLite db file. The idea is to make it easy to store simple named parameters without much effort. See the example "SQLite Attributes.vi".

     

    A more minor upgrade is making "Execute SQL" polymorphic, so as to return data in a number of forms in addition to a 2D array of strings. See the upgraded example "SQLite Example 1 — Create Table.vi", which uses the new polymorphic VI, including showing how to return results as a Cluster.

     

    For Attributes, I had to make some choices about how to store the various LabVIEW types in SQLite's limited number of types. The format I decided on is:

    1) All simple types that already have a defined mapping (i.e. a "Bind" property node) are stored as defined (so strings and paths are Text, DBLs and Singles are Float, and integers, except U64, are Integer).

    2) Timestamps are stored as ISO-8601 Text (the most standardized of the four possibilities).

    3) Enums are stored as their item text, as Text, rather than as the integer value. This seems the most robust against changes in the enum definition.

    4) LVOOP objects are stored flattened in a Blob.

    5) Any other LabVIEW type is contained in a Variant, then flattened and stored in a Blob. Using a flattened Variant means we store the type information and the LabVIEW version.
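The mapping rules above translate into a simple "store by type" dispatch; here is a hedged Python sketch of the same idea (table layout and attribute names are invented, and pickle stands in for LabVIEW flattening):

```python
import datetime
import pickle
import sqlite3

# Sketch of the attribute-storage mapping: simple types use SQLite's
# native types, timestamps become ISO-8601 text, and anything else is
# flattened into a Blob (pickle plays the role of LabVIEW flattening,
# keeping type information with the data).
def to_stored(value):
    if isinstance(value, (str, int, float)):
        return value                            # Text / Integer / Float
    if isinstance(value, datetime.datetime):
        return value.isoformat()                # ISO-8601 Text
    return sqlite3.Binary(pickle.dumps(value))  # Blob

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE attributes (name TEXT PRIMARY KEY, value)")

attrs = {"Operator": "James", "Runs": 3,
         "Started": datetime.datetime(2015, 11, 28, 12, 0)}
con.executemany("INSERT INTO attributes VALUES (?, ?)",
                [(k, to_stored(v)) for k, v in attrs.items()])

(started,) = con.execute(
    "SELECT value FROM attributes WHERE name = 'Started'").fetchone()
```

A single name-value table like this is all the "attributes" feature needs underneath: easy lookups for the simple cases, opaque blobs for the rest.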

     

    post-18176-0-04139400-1448742257.png

     

    drjdpowell_lib_sqlite_labview-1.6.0.51.vip

    LabVIEW 2011-2015

     


    The Attribute stuff grew out of a project where SQLite files held the data, one for each "Run", and the Runs had lots of small bits of information that needed to be stored in addition to the bulk of the data: when and where the measurement was taken, what the equipment setup was, who the Operator was, etc. I purpose-built a name-value look-up table for this, but realized that such a table could be made into reusable "attributes".

  17. The scientific notation is still not 7 digits.

    It also looks on the face of it like the precision of doubles is set to 9 significant digits rather than 15 decimal places.

    Good eye.  It’s because I had to read in the INI config file before writing this JSON, and so inherited the INI’s precision.   Here is a redo where I have manually typed in 123456789 as extra digits before JSON output:

    "AI Bridge Torque (Poly) Setup Cluster Array": [
        {
          "Bridge Information": {
            "Bridge Configuration": 10182,
            "Nominal Bridge Resistance": 350,
            "Voltage Excitation Source": 10200,
            "Voltage Excitation Value": 10
          },
          "Channel Name": "Bridge_Torq(Poly) - PCB Model 039201-53102 Serial 127511",
          "Channel Selected": true,
          "Custom Scale Name": "\u0000\u0000\u0000\u0000",
          "Fwd or Rev Coefs": "Use Fwd Coefs to Gen Rev Coefs",
          "Maximum Value": 1000,
          "Minimum Value": -1000,
          "Physical Channels": "\u0000\u0000\u0000\u000BMFDAQ-X/ai2",
          "Scaling Information": {
            "Electrical Units": 15897,
            "Forward Coeff": [
              -1.78569123456789E-5,
              0.00200225212345679,
              1.20519812345679E-9
            ],
            "Physical Units": 15884,
            "Reverse Coeff": [
              0.00891831812345679,
              499.437812345679,
              -0.150142012345679
            ]
          },
          "Units": 15884
        }
      ],
  18.  

    The attached zip includes the config.INI file along with the VIs and controls.  Also see the VI for the Supercluster itself.  The element I was referring to is as follows.

     

    AI Bridge Torque (Poly) Setup Cluster Array 0.Scaling Information.Forward Coeff = "<size(s)=3> -17.856503E-6 2.002252E-3 1.205196E-9"

     

     

    For comparison, here is the JSON-format of your config file:

     

    ConfigJSON.txt

     

    An excerpt: 

    "AI Bridge Torque (Poly) Setup Cluster Array": [
        {
          "Bridge Information": {
            "Bridge Configuration": 10182,
            "Nominal Bridge Resistance": 350,
            "Voltage Excitation Source": 10200,
            "Voltage Excitation Value": 10
          },
          "Channel Name": "Bridge_Torq(Poly) - PCB Model 039201-53102 Serial 127511",
          "Channel Selected": true,
          "Custom Scale Name": "\u0000\u0000\u0000\u0000",
          "Fwd or Rev Coefs": "Use Fwd Coefs to Gen Rev Coefs",
          "Maximum Value": 1000,
          "Minimum Value": -1000,
          "Physical Channels": "\u0000\u0000\u0000\u000BMFDAQ-X/ai2",
          "Scaling Information": {
            "Electrical Units": 15897,
            "Forward Coeff": [-1.78569E-5,0.002002252,1.205198E-9],
            "Physical Units": 15884,
            "Reverse Coeff": [0.008918318,499.437845,-0.150141962]
          },
          "Units": 15884
        }
      ],
  19.    The numbers being affected are doubles within arrays within a cluster within a cluster within an array within the Supercluster. 

    Wow, that's really pushing the INI file format; what do the files look like?  

     

    Personally, I switch to JSON format once I need to store arrays, let alone arrays within a cluster within a cluster within an array within a cluster.

     

    Note> the JSON package above saves DBLs with 15 significant digits.
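Why 15 digits matters is easy to demonstrate (the value below is one of the "Forward Coeff" numbers from the excerpt above):

```python
# Sketch of the precision point: a 15-significant-digit format
# round-trips this double exactly, while a 6-digit (INI-style) one
# loses the extra digits the user typed in.
x = -1.78569123456789e-5

assert float(f"{x:.15g}") == x   # 15 significant digits: exact round trip
assert float(f"{x:.6g}") != x    # 6 digits: information lost
```

Any 15-significant-digit decimal string converts to a double and back unchanged, which is why the JSON package writes DBLs at that precision.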

     

     
