
Reading JSON into LabVIEW


jzoller


Hi all,

I've put together a small library to convert JSON-formatted messages into something more useful in LabVIEW: variant attributes. It's still in progress (more scripting to go...), but you can see it at http://code.google.com/p/jsonlv/downloads/list. Grab and unzip the .zip, open json_example.vi, and work from there.

Because JSON is a pretty variable format, I'm looking for JSON (valid or not) that breaks the parser. If anyone wants to give it a try, please feel free to contact me here, or at joe@underflowsoftware.com.

(Side note: LV crashes if you try to create a very, very deeply nested cluster using scripting. So... don't do that!)

Thanks,

Joe Z.


I'd like to see a JSON library mature.

Me too. I was hoping someone would come up with a generic solution before I had to, as my first encounter proved it was a "non-trivial" problem. :D

These were my thoughts about JSON and LabVIEW in general from the first skirmish...

The thing about using variants (the feature that never was :P ) and clusters is that it requires detailed knowledge of the structure of the entire JSON stream at design-time when reconstituting it and getting it back into LabVIEW (not an issue when converting "to" JSON). We are back to the age-old problem of LabVIEW strict typing without run-time polymorphic variant conversion.

To get around this so that it could be used in a run-time, on-the-fly sort of way, I eventually decided that maybe it was better to flatten the JSON to key/value string pairs (here I go with my strings again...lol) that could then be used as a look-up table. Although this still requires prior knowledge of the value type if you then convert a value to, say, a double, it doesn't require the whole JSON structure to be known in advance. Instead it converts it to a sort of intermediate INI file, which simplifies the parser (no need to account for every LabVIEW type in the parser). In this form it is easier to digest in LabVIEW with a simple tag look-up, which can be wrapped in a polymorphic VI if "adapt-to-type" is required. It also means, though perhaps a bit out of scope for consideration, that you can just swap the parser out for another (e.g. XML).

Edited by ShaunR

Me too. I was hoping someone would come up with a generic solution before I had to, as my first encounter proved it was a "non-trivial" problem. :D

These were my thoughts about JSON and LabVIEW in general from the first skirmish...

<snip>

OK. I thought I'd put some meat on my thoughts and try a proof of concept that people could play with, poke fingers at and demolish. I'm not in any way trying to detract from the sterling work of jzoller, but I hope that perhaps some of the thoughts might light a bulb that can ease further development of his library. Don't get your hopes up that I will develop it further, as jzoller's library is the end goal.

Of course. It looks like crap, doesn't work properly (the parser is, how you say, "basic") and the "nesting" still needs to be addressed, since it cannot cope with non-unique identifiers (I do have a solution, but would rather hear others first). So don't expect too much because, as I said, it's only a "Proof of Concept".


Thoughts:

If I were approaching this problem, I would create a LabVIEW datatype that matched the recursive structure of JSON. Using LVOOP, I would have the following classes:

Parent: "JSON Value”: the parent of three other classes (no data items)

Child 1: “JSON Scaler”: holds a “scaler” —> string, number, true, false, null (in string form; no need to convert yet)

Child 2: “JSON Array”: array of JSON Values

Child 3: “JSON Object”: set name/JSON Value pairs (could be a Variant Attribute lookup table or some such)

If I'm not missing something, this structure maps one-to-one onto the JSON format, and JSON Value could have methods to convert to or from JSON text format, plus methods to add, set, delete, or query its Values. Like Shaun, I would have the user specify the LabVIEW type they want explicitly and never deal in Variants.

— James


Thoughts:

If I were approaching this problem, I would create a LabVIEW datatype that matched the recursive structure of JSON. <snip>

— James

This is exactly what my example is (analogously - Classic LV to LVPOOP).

The 2D intermediate string array is Child 2, with each row being Child 1, and "lookup.vi" is the accessor (Child 3). The parent is the DVR. Only we don't need all the extra "bloat" that classes demand. I expect if you were to lay down an example, the internal VIs that do all the real work in your classes (there are only two) would look remarkably similar. If you want a class implementation, then you might be better off looking at AQ's.

(I could have also represented the nesting aspect by making the column of the 2D array significant, but I think there may be a better way.)

Edited by ShaunR

So's Joe's design, now that I look at it. Though yours seems more like his "flattened variant"; how are you going to do the nesting?

Well, a simple way (but I can think of better, more complicated ones - linked lists, variant lookups et al) is to make the column in the 2D array significant and use a hierarchical tag, e.g. "first:second:third". But this is inefficient (for lookups - although probably not prohibitively so) and requires a much more complex parser than I'm prepared to write at the moment (which is where jzoller's stuff comes in ;) ). I'm hoping someone has a "slick" approach they've used in the past that we could perhaps just drop in :) The intermediary format is really a secondary consideration, apart from needing to be easily searchable, structure-agnostic and not making the parser overly complicated just to account for "type". A 2D array of strings is just very good for this, particularly as the input is a string and requires string manipulation to extract the data (regex gurus apply here...lol)

Don't forget, my comments aren't trying to address the existing code or how it's coded, per se. It's a limitation I perceive with using clusters and variants (or more specifically, variant clusters) as the interfaces.

Edited by ShaunR

There's a localization bug on European systems (where ',' is the decimal sign): decimal numbers are turned into integers...

Quick fix:

post-2399-0-63659600-1348913578.png

(or my clone at google code:

https://code.google....tcplomp-jsonlv/)

Edit:

Your example JSON code was something that my VariantProbe couldn't deal with.

Some debugging showed that your array ([1,2,3]) was stored as a void datatype. I modified the json_array to store the array as the variant, and the elements as attributes. This returned the following tree:

post-2399-0-62698100-1348921080.png

That also made the 'scripted' Json cluster nicer:

Array as Attributes

post-2399-0-15296400-1348921479.png

Array as Element

post-2399-0-81738400-1348921557.png

Ton

Edited by Ton Plomp

Yup. Keep drinking the cool-ade :D It probably took me the same amount of time to write the concept as it did for you to read my posts...lol

I must code pretty slow. This took me 2-3 whole hours:

JSON drjdpowell.zip

Reads in or writes out JSON of any type, with nesting. One would still need to write methods to get/set the values or otherwise do what you want with it. And add code to check for invalid JSON input.

— James

Added later, with methods written to allow an example of extracting an array of doubles from a JSON Object:

JSON drjdpowell V2.zip

post-18176-0-15160600-1349095247_thumb.p

Rather verbose. But one can wrap it in a “Get Array of DBL by name” method of JSON Object if you want.


I must code pretty slow. This took me 2-3 whole hours:

You included icons :)

<snip>

Rather verbose. But one can wrap it in a “Get Array of DBL by name” method of JSON Object if you want.

Indeed. It's getting the value back out that is the problem. Same as with variants/clusters.

It's getting interesting now, however :)

How about a slightly modified JSON of one of your examples? (Get the "NestArray" Values)

{"T1":456.789 , "T2":"test2", "Nest":{"ZZ":123,"NestArray":[1.3,2.4,1.6] }}

I don't think it is sufficient to simply have a look-up as you have here, but it is close.


How about a slightly modified JSON of one of your examples? (Get the "NestArray" Values)

{"T1":456.789 , "T2":"test2", "Nest":{"ZZ":123,"NestArray":[1.3,2.4,1.6] }}

I don't think it is sufficient to simply have a look-up as you have here, but it is close.

post-18176-0-54717900-1349100761_thumb.p

If one does a lot of digging things out multiple object levels deep, then one could build something on top of this base that, say, uses some formatting to specify the levels (e.g. "Nest>>NestArray" as the name). But if one is using abstraction layers in one's code, one won't be doing that very often, as to each layer of code the corresponding JSON should appear quite simple. And I think it is more important to build the inherent recursion of JSON in at the base, rather than a great multi-level lookup ability.

Here, for example, is another extension: a VI to convert any (OK, many) LabVIEW types into corresponding JSON. It leverages the OpenG variant tools. It was very easy to make it work on nested clusters, because it just recursively walks along the cluster hierarchy and builds a corresponding JSON Object hierarchy.

post-18176-0-86058500-1349101816.png

—James

JSON drjdpowell V3.zip


post-18176-0-54717900-1349100761_thumb.p

This is the "problem" as I was outlining it earlier. You have now hard-coded the retrieval of the value based on the structure of the entire stream.

If one does a lot of digging things out multiple object levels deep, then one could build something on top of this base that, say, uses some formatting to specify the levels (e.g. "Nest>>NestArray" as the name). But if one is using abstraction layers in one's code, one won't be doing that very often, as to each layer of code the corresponding JSON should appear quite simple. And I think it is more important to build the inherent recursion of JSON in at the base, rather than a great multi-level lookup ability.

The former is preferable from a genericism point of view. The latter, I think, is inflexible (I use my infamous "->", by the way).

Here, for example, is another extension: a VI to convert any (OK, many) LabVIEW types into corresponding JSON. <snip>

Yup. Getting it in is OK. Like I said, getting it out again in a generic way, so that you don't "hard-code" it in, is the tricky bit.

I'll also have to take a look at Ton's thingy, since he is flattening to display. I can then use jzoller's parser :).

Edited by ShaunR

What I mean by "abstraction layers" is that no level of code should be handling that many levels of JSON. In your example, the same code that knows what a "glossary" is also knows how "GlossSeeAlso" is stored, five levels down.

For example, imagine an "experiment setup" JSON object that contains a list of "instrument setup" objects corresponding to the different pieces of equipment. The code to set up the experiment could iterate over this list and pass the "instrument setup" objects to the corresponding instrument code. The full JSON object could be very complex with many levels, but to the higher-level code it looks simple; just an array of generic things. And each piece of lower-level code is only looking at a subset of the full JSON object. No individual part of the code should be dealing with everything.

BTW, I see there is another recent JSON attempt here. They use Variants.

Edited by drjdpowell

What I mean by "abstraction layers" is that no level of code should be handling that many levels of JSON. In your example, the same code that knows what a "glossary" is also knows how "GlossSeeAlso" is stored, five levels down.

Not quite.

The code knows nothing. It doesn't know what a glossary IS, only that it is a field name it should look up for the programmer - it just gets what the programmer asks for. If the JSON structure changes, no changes to the API are needed. It doesn't care what the structure of the JSON object is; it's just an accessor to the fields within the JSON object - any JSON object.

For example, imagine an "experiment setup" JSON object that contains a list of "instrument setup" objects corresponding to the different pieces of equipment. The code to set up the experiment could iterate over this list and pass the "instrument setup" objects to the corresponding instrument code. The full JSON object could be very complex with many levels, but to the higher-level code it looks simple; just an array of generic things. And each piece of lower-level code is only looking at a subset of the full JSON object. No individual part of the code should be dealing with everything.

There is nothing stopping you doing this, but this isn't the responsibility of a parser. There is nothing to stop you creating an "object" output polymorphic case for your "experiment setup" (or indeed a whole bunch of them); you just need to tell it what fields it consists of and add the terminal. However, that polymorphic case will be fixed and specific to your application, and not reusable on other projects (as it is with direct conversion to variant clusters). What is more likely, however, is that your class accessors (Get) will just call one of the polymorphic VIs with the appropriate tag when you need to get the value out.

I think you just need a better lookup and you'll be there! (with bells on) ;) No need to go complicating it further by making the programmer write reams of application-specific code just to get a value out, for the sake of "objectness".

Edited by ShaunR

The code knows nothing. It doesn't know what a glossary IS, only that it is a field name it should look up for the programmer - it just gets what the programmer asks for. If the JSON structure changes, no changes to the API are needed. It doesn't care what the structure of the JSON object is; it's just an accessor to the fields within the JSON object - any JSON object.

Rephrased with respect to the programmer, then: the programmer shouldn't have to understand the entire application and data structure, from high-level to low-level, at the same time.

There is nothing stopping you doing this, but this isn't the responsibility of a parser. There is nothing to stop you creating an "object" output polymorphic case for your "experiment setup" (or indeed a whole bunch of them); you just need to tell it what fields it consists of and add the terminal. However, that polymorphic case will be fixed and specific to your application, and not reusable on other projects (as it is with direct conversion to variant clusters).

Sorry, I meant JSON "Objects", not application-specific LVOOP objects. No custom code needed.

I think you just need a better lookup and you'll be there! (with bells on) ;)

One could certainly write a multi-level lookup API on top of what I have already. Should be quite easy (though tedious with all the polymorphic instances). Wasted too many hours on this today, though. I don’t have any projects that actually need JSON. :rolleyes:

— James


I think you just need a better lookup and you'll be there! (with bells on) ;)

A little free time this morning:

post-18176-0-75919700-1349182736.png

Used arrays, but you could use some parsable string format like “->”. The polymorphic VI currently has only one instance of the many, many it would need. The lower part shows selection of a subset of the JSON that can be passed generically to lower code layers.

— James

JSON drjdpowell V4.zip


Because JSON is a pretty variable format, I'm looking for JSON (valid or not) that breaks the parser. If anyone wants to give it a try, please feel free to contact me here, or at joe@underflowsoftware.com.

Breaks parser:

Backslash-escaped quotes \" in strings (e.g. "And so I said, \"Hello.\"")

Sort of breaks:

U64 and Extended-precision numbers, since you convert numbers to DBL internally. Note that in both my and Shaun's prototypes, we keep the numbers in string form until the user specifies the format required.

Possible issue?:

NaN, Inf and -Inf: valid numeric values that aren't in the JSON standard. It might be an idea to add them as possible JSON values, or otherwise decide what to do with them when you write code to turn LabVIEW numerics into JSON (e.g. NaN would be "null").

— James


Breaks parser:

Backslash-escaped quotes \" in strings. <snip>

— James

Sweet. Only the boring parts to go then :)

I made a slight change to your lookup by adding a "To String" in each of the classes to be overridden. This means that the polymorphic VIs become very simple (not to mention that I could just replace my lookup with yours, change terminals and, hey presto, all the polys I've already created, with icons, slot straight in :) ).

I've added U8, U16, U32, U64, I8, I16, I32, I64, String, String Array, Double Array and Boolean.

(I've back-saved it to 2009 so others can play, although the Hi-Res timer isn't available, so the benchmark test won't work.)

Next on my list is to get rid of the OpenG stuff.

Edited by ShaunR

I made a slight change to your lookup by adding a "To String" in each of the classes to be overridden. This means that the polymorphic VIs become very simple (not to mention that I could just replace my lookup with yours, change terminals and, hey presto, all the polys I've already created, with icons, slot straight in :) ).

Nicely done!

Though I think you didn’t need “To String”, as “Flatten” does the exact same thing. I never thought of using the JSON string form internally to make the outer polymorphic API easier. Great idea.

I've back-saved it to 2009 so others can play

Not sure how many are still reading. :rolleyes:

Next on my list is to get rid of the OpenG stuff.

Don’t like the OpenG stuff? I love the Variant DataTools.

— James


Though I think you didn’t need “To String”, as “Flatten” does the exact same thing. I never thought of using the JSON string form internally to make the outer polymorphic API easier. Great idea.

Nearly. Flatten adds things like quotes and brackets. For conversion, these need to be removed. Whilst I dare say you could make it work that way, I wanted to leave most of your stuff as-is and "add" rather than change if at all possible.

Not sure how many are still reading. :rolleyes:

Put it in the CR and see how many downloads :).

Don’t like the OpenG stuff? I love the Variant DataTools.

It's not a case of liking it. There's some great stuff in there. It's that not everyone can use OpenG stuff, and it's not really appropriate to expect someone to install a shedload of 3rd-party stuff that isn't required just to use a small API (I had to install OpenG especially to look at your code and uninstall it afterwards).

