
Posts posted by ShaunR

  1. Actually it's not! You are right that the shared library will refuse to work if your clock is set after June 2010 or so, simply by posting a dialog at runtime. But the reason for the original refnum problem is that this library makes use of so-called user refnums. These are defined by resource files that get installed by the package, and for those refnums to be valid the according resource files have to be loaded by LabVIEW, which it only does at startup (at least I'm not aware of any VI Server method to do that at runtime too, like the refresh palette method that VIPM uses after installation of a new package).

    I'll be having a look at the library soon to see if I can do anything to resurrect it, but feedback has been very limited, so I was simply assuming that nobody was using it.

    Please note that the SSL support of that library is really minimal. It allows you to get an https: connection up and running, but lacks any and all support for modifying properties and accessing methods of the SSL context to change its behavior or, for instance, add private certificates to it.

    I think it's extremely useful and well worth maintaining.

    I noticed that in the readme you have a list of TODO items that I might be able to help with (I would need the source and a bit of guidance, though). The later versions of LabVIEW have SSL DLLs that might make things easier (from a multi-platform point of view), although they seem very inflexible.

    What is the licensing?

  2. Any further improvements?

    Just had a cursory glance, but it looks like you are calculating the coefficients and passing the XY params for the linear fit twice with the same data (it's only the weightings that change from the first "fit" to the second). You could pre-calculate them in a separate loop and just pass them into the other loops. Also, you might benefit from passing through the x array (through the coefficient VI).

  3. A lot of the stuff you need for the network "architecture" is probably contained in the Dispatcher in the CR. It can do the "clustering" by simply placing the dispatcher in the right place and pointing the publishers and subscribers to it (it can be on the same machine or centralised, and you can have multiple dispatchers spread out across many machines). What you send and what you do with it is then up to your implementation.

    I wouldn't suggest UDP for this, however, unless you are going to write a robust protocol on top, which is a lot of work and negates a lot of the advantages.

  4. I would guess that deal created over half a million in sales for NI. It was more a trial than anything, but for that amount I don't expect to have to beg for real support. We had an FAE at the time and he was given every opportunity to help us locate someone. If this had been a one-off prototype, my expectations would have been different. All water under the bridge now; we learned our lesson the hard way.

    In my case it was one FPGA card (~$4K). However. That reminds me of the other trick. Ask for a loaner to test before you buy if you have never used the product before (they've always got a couple for demos and conferences kicking around). It concentrates a sales rep's mind :D They tend to be focused on "potential" sales rather than "previous" sales, but they can pull the strings.

  5. I don’t mean utility VIs for the User of the API; rather, I mean “utility” for writing the package internally. Conversion to/from valid JSON string format will be required in multiple places. I tend to call subVIs needed by the class methods, but not themselves using those classes in any way, “Utility” subVIs.

    There are a good 30+ VIs in dependencies. Copying all of that to support my variant-to-JSON stuff is excessive. Compare it with just changing the one “remove whitespace” subVI to make the rest of the package independent. But as I said, it should be easy to make the variant stuff an optional add-on for those who don’t mind adding a couple of OpenG packages.

    I don't think there are that many. Guess I'll have to install them again and find out :lol:

    Is it OK to put unfinished stuff in the CR, even uncertified? I’m afraid I’m about to go on a two-week vacation, but I could put what we have to this point in the CR and commit some free time to finishing it when I get back. Don’t whip up a thousand and one pretty polymorphic instances until we get the core stuff finished. :D

    I would. There are a lot less "working" things in there already :) Alternatively. Start a new thread.

    At some point I switched from not using OpenG if possible, to considering it “standard LabVIEW”. VIPM making it so easy probably contributed to this shift.

    Indeed. I have to be very careful about dependencies. Some clients insist on "approved vendors" or "no 3rd party/open source" and, for most of the stuff in OpenG that I would use, I have my own versions that I've built up over the years. It's just easier not to use it than to get bogged down in lengthy approval processes.

  6. The package needs a pair of utility VIs that convert strings to/from the valid JSON form (quoted, with backslash-escaped control characters and possible Unicode encoding).

    Do they need to be utility VIs? We can detect control chars (they would break the lookup, I think, so they need to be removed), and to escape, the Flatten could just have a boolean. Not really sure what you have in mind, though.
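    (For reference, a minimal sketch of the escape/unescape pair being discussed, with Python standing in for the graphical code; the function names are illustrative only, not the package's actual VIs:)

        _ESCAPES = {'"': '\\"', '\\': '\\\\', '\b': '\\b', '\f': '\\f',
                    '\n': '\\n', '\r': '\\r', '\t': '\\t'}

        def escape_json_string(s):
            """Convert a raw string to its quoted, escaped JSON form."""
            out = []
            for ch in s:
                if ch in _ESCAPES:
                    out.append(_ESCAPES[ch])
                elif ord(ch) < 0x20:               # remaining control chars
                    out.append('\\u%04x' % ord(ch))
                else:
                    out.append(ch)
            return '"' + ''.join(out) + '"'

        def unescape_json_string(s):
            """Inverse: strip the quotes and resolve backslash escapes."""
            assert s[0] == '"' and s[-1] == '"'
            body, out, i = s[1:-1], [], 0
            while i < len(body):
                if body[i] == '\\':
                    nxt = body[i + 1]
                    if nxt == 'u':                 # \uXXXX escape
                        out.append(chr(int(body[i + 2:i + 6], 16)))
                        i += 6
                    else:                          # \" \\ \/ \b \f \n \r \t
                        out.append({'b': '\b', 'f': '\f', 'n': '\n',
                                    'r': '\r', 't': '\t'}.get(nxt, nxt))
                        i += 2
                else:
                    out.append(body[i])
                    i += 1
            return ''.join(out)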

    The variant-to-JSON stuff could be kept separate as an optional feature that requires OpenG (a lot of work to rewrite that without OpenG). Otherwise, I think I just used the faster version of “Trim Whitespace”, easily replaced.

    If I remember correctly, as long as you keep the copyright on the VI and, perhaps, the documentation, you can use, modify and do pretty much what you like with them (someone on the OpenG team could advise). It may be possible to rename just the variant stuff (there are only a couple) so they don't clash, and include them in the package so it is then completely self-contained with no dependencies.

    Might I suggest you place it in the uncertified CR so that we can make a list of things that need to be done and manage it? We've been rather obnoxious in hijacking JZoller's thread. My apologies, JZoller! :worshippy:

    I have heaps of reuse installed so that I can draw on it naturally. If I need to, I just do a project analysis afterwards to audit what was used. If anyone's going to, I would expect you to use a totally different paradigm. :D

    Yeah. There is a re-use library consisting of about 10 VIs for mundane stuff. That's about all I need from project to project :). Everything else is self-contained APIs.

  7. Though I think you didn’t need “To String”, as “Flatten” does the exact same thing. I never thought of using the JSON string form internally to make the outer polymorphic API easier. Great idea.

    Nearly. Flatten adds things like quotes and brackets. For conversion, these need to be removed. Whilst I dare say you could make it work that way, I wanted to leave most of your stuff as-is and "add" rather than change if at all possible.

    Not sure how many are still reading. :rolleyes:

    Put it in the CR and see how many downloads :).

    Don’t like the OpenG stuff? I love the Variant DataTools.

    It's not a case of liking. There's some great stuff in there. It's a case that not everyone can use OpenG stuff. It's also not really appropriate to expect someone to install a shedload of 3rd-party stuff that isn't required just to use a small API (I had to install OpenG especially just to look at your code, and uninstall it afterwards).

  8. Well. Here's my experience.........

    I was working on an FPGA and we wanted to transfer huge amounts of data from a 3rd-party FPGA acquisition board, across the PXI backplane, to an NI board for crunching. We couldn't use the NI streaming VIs since the technology is proprietary and NI wouldn't liaise with the 3rd party so they could implement it in their FPGA (which is fair enough). However. NI said that they could DMA at about 700 MB/sec in each direction (1.5 GB/sec) across the backplane, which was "good enough for our team".

    The only problem was that none of the examples addressed this sort of throughput, apart from mentioning that, under the right conditions, it was possible.

    So long story short: the local NI rep hooked me up with the UK FPGA guru. I sent through an example of what we wanted to do (with which I was getting about 70 MB/sec) and he sent through a modified version with comments about where and what was important in my example for getting the throughput. It could do it at 735 MB/sec (in each direction). He also sent me through an internal (not-for-distribution) benchmark document of all the NI PXI controllers: what their capabilities were, what measured throughputs could be obtained, with which backplanes, and which board positions within the rack (which is important).

    Saying all that, it did take me two weeks to get through to him. I had to go through the "correct channels" first before the NI rep had a good excuse to "escalate" the issue through the system. The key is really building up a contacts list of direct-dial numbers to the right people. If you know what you are talking about, they will be happy to take your call as they know it's not a silly problem. NI's problem is that there are too many inexperienced people calling support for trivial things and, unfortunately for us, their system has been set up so that the engineers are well buffered from this.

  9. Breaks parser:

    Backslash-escaped quotes \" in strings (e.g. "And so I said, \"Hello.\"")

    Sort of breaks:

    U64 and Extended precision numbers, since you convert numbers to DBL internally. Note that in both my and Shaun’s prototypes, we keep the numbers in string form until the User specifies the format required.

    Possible issue?:

    NaN, Inf and -Inf: valid numeric values that aren’t in the JSON standard. It might be an idea to add them as possible JSON values, or otherwise decide what to do with them when you write code to turn LabVIEW numerics into JSON (e.g. NaN would become "null").

    — James
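    (For the first break above, a sketch of the escape-aware scan that would fix it; Python for illustration only, this is not the parser's actual code:)

        def scan_string(text, start):
            """Return the index just past the closing quote of the JSON
            string whose opening quote is at text[start]."""
            assert text[start] == '"'
            i = start + 1
            while i < len(text):
                if text[i] == '\\':      # skip whatever character is escaped
                    i += 2
                elif text[i] == '"':     # a genuine closing quote
                    return i + 1
                else:
                    i += 1
            raise ValueError("unterminated JSON string")

        # scan_string('"And so I said, \\"Hello.\\""', 0) consumes the whole
        # string instead of stopping at the inner \" escape.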

    Sweet. Only the boring parts to go then :)

    I made a slight change to your lookup by adding a "To String" in each of the classes to be overridden. This means that the polymorphic VIs become very simple (not to mention that I could just replace my lookup with yours, change terminals and, hey presto, all the polys I've already created, with icons, slot straight in :) ).

    I've added U8,U16, U32, U64, I8, I16, I32, I64, String, String Array, Double Array and Boolean.

    (I've back-saved it to 2009 so others can play, although the Hi-Res timer isn't available so the benchmark test won't work.)

    Next on my list is to get rid of the OpenG stuff.

  10. One is a hardware device for which the people at NI, who have some serious device-driver development experience, actually sat down and wrote a specific driver; the other is usually a minimalized copy of the reference design from the chip manufacturer, often with a completely unaltered device driver from that same chip manufacturer.

    Don't forget the support! Support for NI devices is second-to-none. It is this you are truly paying for.

  11. What I mean by “abstraction layers” is that no level of code should be handling that many levels of JSON. In your example, the same code that knows what a “glossary” is also knows how “GlossSeeAlso” is stored, five levels deep.

    Not quite.

    The code knows nothing. It doesn't know what a glossary IS, only that it is a field name it should look up for the programmer; it just gets what the programmer asks for. If the JSON structure changes, no changes to the API are needed. It doesn't care what the structure of the JSON object is; it's just an accessor to the fields within the JSON object - any JSON object.

    For example, imagine an “experiment setup” JSON object that contains a list of “instrument setup” objects corresponding to the different pieces of equipment. The code to set up the experiment could iterate over this list and pass the “instrument setup” objects to the corresponding instrument code. The full JSON object could be very complex with many levels, but to the higher-level code it looks simple; just an array of generic things. And each piece of lower-level code is only looking at a subset of the full JSON object. No individual part of the code should be dealing with everything.

    There is nothing stopping you doing this, but it isn't the responsibility of a parser. There is nothing to stop you creating an "object" output polymorphic case for your “experiment setup” (or indeed a whole bunch of them); you just need to tell it what fields it consists of and add the terminal. However. That polymorphic case will be fixed and specific to your application, and not reusable on other projects (as it is with direct conversion to variant clusters). What is more likely, however, is that your class accessors (Get) will just call one of the polymorphic VIs with the appropriate tag when you need to get the value out.

    I think you just need a better lookup and you'll be there! (with bells on) ;) No need to go complicating it further by making the programmer write reams of application-specific code just to get a value out for the sake of "objectness".

  12. [attached screenshot]

    This is the "problem" as I was outlining it earlier. You have now hard-coded the retrieval of the value based on the structure of the entire stream.

    If one does a lot of digging things out multiple object levels deep, then one could build something on top of this base that, say, uses some formatting to specify the levels (e.g. "Nest>>NestArray" as the name). But if one is using abstraction layers in one’s code, one won’t be doing that very often, as to each layer of code the corresponding JSON should appear quite simple. And I think it is more important to build the inherent recursion of JSON in at the base than a great multi-level lookup ability.

    The former is preferable from a genericism point of view. The latter, I think, is inflexible (I use my infamous "->", by the way).
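    (For anyone following along, a sketch of the kind of multi-level lookup being debated, with Python dicts standing in for the parsed JSON; the "->" separator is Shaun's, everything else is illustrative:)

        def lookup(json_obj, path, sep="->"):
            """Walk a parsed JSON object by a hierarchical tag."""
            node = json_obj
            for key in path.split(sep):
                node = node[key]        # raises KeyError if the tag is absent
            return node

        doc = {"T1": 456.789, "T2": "test2",
               "Nest": {"ZZ": 123, "NestArray": [1.3, 2.4, 1.6]}}
        print(lookup(doc, "Nest->NestArray"))   # [1.3, 2.4, 1.6]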


    Here, for example, is another extension: a VI to convert any (OK, many) LabVIEW types into corresponding JSON. It leverages the OpenG variant tools. It was very easy to make it work on nested clusters, because it just recursively walks along the cluster hierarchy and builds a corresponding JSON Object hierarchy.

    [attached screenshot]

    —James

    JSON drjdpowell V3.zip
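    (A textual sketch of the recursive walk James describes, with a Python dict standing in for a LabVIEW cluster; the real VI uses the OpenG variant tools for the introspection, and string escaping is elided here:)

        def to_json(value):
            if isinstance(value, dict):           # cluster -> JSON object
                body = ",".join('"%s":%s' % (k, to_json(v))
                                for k, v in value.items())
                return "{" + body + "}"
            if isinstance(value, (list, tuple)):  # array -> JSON array
                return "[" + ",".join(to_json(v) for v in value) + "]"
            if isinstance(value, bool):
                return "true" if value else "false"
            if isinstance(value, str):
                return '"%s"' % value             # (escaping elided)
            return str(value)                     # numerics

        print(to_json({"Nest": {"ZZ": 123, "NestArray": [1.3, 2.4]}}))
        # {"Nest":{"ZZ":123,"NestArray":[1.3,2.4]}}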

    Yup. Getting it in is OK. Like I said, getting it out again in a generic way, so that you don't "hard-code" it in, is the tricky bit.

    I'll also have to take a look at Ton's thingy since he is flattening to display. I can then use JZoller's parser :).

  13. I must code pretty slow. This took me 2-3 whole hours:

    You included icons :)

    JSON drjdpowell.zip

    Reads in or writes out JSON of any type, with nesting. One would still need to write methods to get/set the values or otherwise do what you want with it. And add code to check for invalid JSON input.

    — James

    Added later, with methods written to provide an example of extracting an array of doubles from a JSON Object:

    JSON drjdpowell V2.zip

    [attached screenshot]

    Rather verbose. But one can wrap it in a “Get Array of DBL by name” method of JSON Object if one wants.

    Indeed. It is getting the value back out that is the problem. Same as with variants/clusters.

    It's getting interesting now, however :)

    How about a slightly modified JSON of one of your examples? (Get the "NestArray" Values)

    {"T1":456.789 , "T2":"test2", "Nest":{"ZZ":123,"NestArray":[1.3,2.4,1.6] }}

    I don't think it is sufficient to simply have a look-up as you have here, but it is close.
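    (A sketch of the "keep it as a string until asked" idea from this exchange: values live as strings in a flat lookup table, and typed getters are the textual analogue of the polymorphic VIs; Python and all names here are illustrative only:)

        table = {"T1": "456.789", "T2": "test2",
                 "Nest->ZZ": "123", "Nest->NestArray": "[1.3,2.4,1.6]"}

        def get_dbl(tag):
            """Convert the stored string only when a double is requested."""
            return float(table[tag])

        def get_dbl_array(tag):
            body = table[tag].strip("[]")
            return [float(x) for x in body.split(",")] if body else []

        print(get_dbl_array("Nest->NestArray"))   # [1.3, 2.4, 1.6]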

  14. I've never looked at the actual bitwise representation of a timestamp; how sure are you of this? I've read the whitepaper Phillip linked before, and that pretty much cemented a 16-byte representation. Their interpretation examples seem to contest what you're saying.

    Hmmm. Not sure where I got that from. Certainly the LabVIEW Timestamp whitepaper I just found shows it is indeed 128-bit, so I'm obviously wrong. But I have recollections of it being 12 bytes, as it was one of the improvements (adding a timestamp) to the Transport.lib (which, after some research, I made 12 bytes). Since then it's just stuck as one of those anomalies to my expectations, since 12 is a bizarre number.
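    (For reference, the layout in the whitepaper is a signed 64-bit count of whole seconds since 1904-01-01 00:00:00 UTC followed by an unsigned 64-bit fraction of a second; a sketch of decoding it, in Python for illustration only:)

        import struct

        def decode_lv_timestamp(raw):
            """Turn 16 big-endian bytes into seconds since the 1904 epoch."""
            seconds, fraction = struct.unpack(">qQ", raw)
            return seconds + fraction / 2**64

        raw = struct.pack(">qQ", 2, 2**63)   # 2.5 s past the epoch
        print(decode_lv_timestamp(raw))      # 2.5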

  15. So’s Joe’s design, now that I look at it. Though yours seems more like his “flattened variant”; how are you going to do the nesting?

    Well. A simple way (though I can think of better, more complicated ones: linked lists, variant lookups et al) is to make the column in the 2D array significant and use a hierarchical tag, e.g. "first:second:third". But this is inefficient (for lookups, although probably not prohibitively so) and requires a much more complex parser than I'm prepared to write at the moment (which is where JZoller's stuff comes in ;) ). I'm hoping someone has a "slick" approach they've used in the past that we could perhaps just drop in :) The intermediary format is really a secondary consideration, apart from needing to be easily searchable, structure-agnostic, and not make the parser overly complicated just to account for "type". A 2D array of strings is just very good for this, particularly as the input is a string and requires string manipulation to extract the data (regex gurus apply here...lol).

    Don't forget my comments aren't trying to address the existing code or how it's coded, per se. It's a limitation I perceive with using clusters and variants (or more specifically, variant clusters) as the interfaces.

  16. Some things perhaps you didn't know about the timestamp that may shed some light.

    In 2009 it is also 1 ms. The 14 ms you are talking about is probably because you were using Windows XP, where the timeslice was about 15 ms (Windows 2000 was ~10 ms).

    LabVIEW timestamps are 12 bytes (96 bits, not 128). The upper 4 bytes are not used and are always zero.

  17. Thoughts:

    If I were approaching this problem, I would create a LabVIEW datatype that matched the recursive structure of JSON. Using LVOOP, I would have the following classes:

    Parent: "JSON Value”: the parent of three other classes (no data items)

    Child 1: “JSON Scalar”: holds a “scalar” -> string, number, true, false, null (in string form; no need to convert yet)

    Child 2: “JSON Array”: array of JSON Values

    Child 3: “JSON Object”: a set of name/JSON Value pairs (could be a Variant Attribute lookup table or some such)

    If I’m not missing something, this structure matches the JSON format one-to-one, and JSON Value could have methods to convert to or from JSON text format, plus methods to add, set, delete, or query its Values. Like Shaun, I would have the user specify the LabVIEW type they want explicitly and never deal in Variants.

    — James

    This is exactly what my example is (analogously: classic LV to LVOOP).

    The 2D intermediate string array is Child 2, with each row being Child 1, and "lookup.vi" is the accessor (Child 3). The parent is the DVR. Only we don't need all the extra "bloat" that classes demand. I expect that if you were to lay down an example, the internal VIs that do all the real work in your classes (there are only two) would look remarkably similar. If you want a class implementation, then you might be better off looking at AQ's.

    (I could have also represented the nesting aspect by making the column of the 2D array significant. But I think there may be a better way.)
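    (As an aside, a textual sketch of James's proposed hierarchy, with Python classes standing in for the LVOOP ones; the class names come from his list above, the rest is illustrative:)

        class JSONValue:                    # parent: no data items
            pass

        class JSONScalar(JSONValue):        # string/number/true/false/null,
            def __init__(self, text):      # kept in string form for now
                self.text = text

        class JSONArray(JSONValue):         # ordered array of JSON Values
            def __init__(self, items=()):
                self.items = list(items)

        class JSONObject(JSONValue):        # name -> JSON Value pairs
            def __init__(self, pairs=()):
                self.pairs = dict(pairs)

        # One-to-one with the grammar, e.g. {"ZZ":123,"NestArray":[1.3,2.4]}:
        nest = JSONObject({"ZZ": JSONScalar("123"),
                           "NestArray": JSONArray([JSONScalar("1.3"),
                                                   JSONScalar("2.4")])})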

  18. Me too. I was hoping someone would come up with a generic solution before I had to, as my first encounter proved it was a "non-trivial" problem. :D

    These were my thoughts about Json and LabVIEW in general from the first skirmish........

    <snip>

    OK. I thought I'd put some meat on my thoughts and try a proof of concept that people can play with, poke fingers at and demolish. I'm not in any way trying to divert from the sterling work of JZoller, but I hope that perhaps some of the thoughts might light a bulb that can ease further development of his library. Don't get your hopes up that I will develop it further, as JZoller's library is the end goal.

    Of course, it looks like crap, doesn't work properly (the parser is, how you say, "basic") and the "nesting" still needs to be addressed, since it cannot cope with non-unique identifiers (I do have a solution, but would rather hear others first). So don't expect too much because, as I said, it's only a "Proof of Concept".

  19. — A User Event can be thought of as an array of queues; firing the Event is the same as enqueuing to all the queues, and the Event Registration Node serves to add its queue to the array. When created, of course, the User Event is an empty array.

    Whilst the mechanics may be thought of in that way (and indeed, each may be coerced to emulate the properties of the other), they are actually different topologies. Queues are "many-to-one" and events are "one-to-many".
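    (A minimal sketch of the analogy, in Python rather than LabVIEW internals: firing a user event enqueues to every registered queue, which is why events are one-to-many while a single queue is many-to-one:)

        from queue import Queue

        class UserEvent:
            def __init__(self):
                self.queues = []          # "empty array" until registration

            def register(self):           # Event Registration Node analogue
                q = Queue()
                self.queues.append(q)
                return q

            def fire(self, data):         # Generate User Event analogue
                for q in self.queues:     # one firing, many receivers
                    q.put(data)

        evt = UserEvent()
        a, b = evt.register(), evt.register()
        evt.fire("hello")
        print(a.get(), b.get())           # both subscribers see "hello"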

    — a “name” is a reference, the same as a refnum.

    ....but with no type!

    — “subpaneling” is a silly term :)

    Agreed :)

    In rare cases the event queue can get muddled if the events are sent more quickly than the timestamp resolution can take care of. This leads to the events possibly being received IN THE WRONG ORDER. This is within the same event queue, and can be provoked even with the same event.

    It has been reported and apparently the timestamp will include a more robust mechanism in future to avoid this....

    In this regard, the queue vs events comparison gets a bit weird as the order in a Queue CANNOT be muddled (AFAIK).

    Shane.

    PS: I'm still a big event fan though.

    I've been bitten by these sorts of things in the past too.....they are a real bugger to track down and the solution is invariably to use a queue instead.

  20. I'd like to see a JSON library mature.

    Me too. I was hoping someone would come up with a generic solution before I had to, as my first encounter proved it was a "non-trivial" problem. :D

    These were my thoughts about Json and LabVIEW in general from the first skirmish........

    The thing about using variants (the feature that never was :P ) and clusters is that it requires detailed knowledge of the structure of the entire Json stream at design time when reconstituting it and getting it back into LabVIEW (not an issue when converting "to" Json). We are back to the age-old problem of LabVIEW's strict typing without run-time polymorphic variant conversion.

    To get around this so that it could be used in a run-time, on-the-fly sort of way, I eventually decided that maybe it was better to flatten the Json to key/value string pairs (here I go with my strings again...lol) that could then be used as a look-up table. Although this still requires prior knowledge of the value type if you then convert a value to, say, a double, it doesn't require the whole Json structure to be known in advance; instead it converts it to a sort of intermediate ini file, which simplifies the parser (no need to account for every LabVIEW type in the parser). In this form, it is easier to digest in LabVIEW with a simple tag look-up, which can be wrapped in a polymorphic VI if "adapt-to-type" is required. It also means, though perhaps a bit out of scope for consideration, that you can just swap the parser out for another (e.g. XML).
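    (A sketch of that flattening, with Python standing in for the LabVIEW implementation; the ":"-separated tag scheme is one of the options floated elsewhere in the thread, and everything here is illustrative:)

        def flatten(node, prefix=""):
            """Reduce a parsed JSON structure to (tag, value-as-string) pairs."""
            pairs = []
            if isinstance(node, dict):
                for name, child in node.items():
                    pairs += flatten(child, prefix + name + ":")
            else:
                # scalars and arrays stay in string form; the type is
                # decided later, at look-up time
                pairs.append((prefix.rstrip(":"), str(node)))
            return pairs

        doc = {"T1": 456.789, "Nest": {"ZZ": 123, "NestArray": [1.3, 2.4, 1.6]}}
        print(dict(flatten(doc)))
        # {'T1': '456.789', 'Nest:ZZ': '123', 'Nest:NestArray': '[1.3, 2.4, 1.6]'}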
