Posts posted by drjdpowell

  1. Are you suggesting that if the JSON stream has a double and an Int is requested then it should throw an error?

    Oh no, definitely not that.

    I mean what if the User requests:

    — the string “Hello” as a DBL? Is this NaN, zero, or an error? What about as an Int32? A timestamp?

    — for that matter, what about a boolean? Should anything other than ‘true’/‘false’ be an error? Any non-‘null’/non-‘false’ value be true (including the JSON strings “null” and “false”)? Or any non-‘true’ value be false (even the JSON string “true”)?

    — “1.2Hello” as a DBL? Is this 1.2 or an error?

    — or just “1.2”, a JSON string, not a JSON numeric? Should we (as we are doing) allow this to be converted to 1.2?

    — a JSON Object as an Array of DBL? A “Not an array” error, or an array of all the Objects items converted to DBL?

    — a JSON Scalar as an Array of DBL? Error or a single element array?

    — a JSON Object as a DBL? Could return the first item, but Objects are unordered, so “first” is a bit problematic.

    And what if the User asks for an item by name from:

    — an Object that doesn’t have that named item? Currently this is no error, but we have a “found” boolean output that is false.

    — an Array or Scalar? Could be an Error, or just return false for “found”.

    Then for the JSON to Variant function there is:

    — cluster item name not present in the JSON Object: an error, or return the default value?

    Personally, I think we should allow as much “loose-typing” as possible, but I’m not sure where the line should be drawn for returning errors.
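Since LabVIEW diagrams can’t be shown in text, here is a rough Python sketch of one possible answer to these questions (the function names and the specific choices are illustrative only, not the library’s actual behavior):

```python
import math

def get_as_double(json_value):
    # One possible loose-typing rule set for a "Get as DBL"
    # (a sketch only; these choices are up for discussion).
    if json_value is None:                    # JSON null -> NaN
        return math.nan
    if isinstance(json_value, bool):          # true -> 1.0, false -> 0.0
        return 1.0 if json_value else 0.0
    if isinstance(json_value, (int, float)):  # JSON numeric
        return float(json_value)
    if isinstance(json_value, str):
        try:
            return float(json_value)          # allow the JSON string "1.2"
        except ValueError:
            return math.nan                   # "Hello", "1.2Hello" -> NaN, not an error
    raise TypeError("no sensible DBL conversion for %r" % (json_value,))

def get_as_double_array(json_value):
    # A scalar could be promoted to a single-element array.
    if isinstance(json_value, list):
        return [get_as_double(v) for v in json_value]
    return [get_as_double(json_value)]
```

Each branch above corresponds to one of the questions in the list; tightening or loosening any branch changes where the error line is drawn.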

  2. I’m talking about errors in the conversion from our “JSON” object into LabVIEW datatypes. There are also errors in the initial interpretation of the JSON string (missing quotes, or whatever); there we will definitely need to throw errors, with meaningful information about where the parsing error occurred in the input JSON string.

    For debugging type conversion problems, one can use the custom probes to look at the sub-JSON objects fed into the “Get as…”; this will be a subset of the full initial JSON string.

    BTW, here’s the previous example where I’ve introduced an error into the Timestamp format (and probed the value just before the “Get”):

    [attached screenshot]

  3. Actually, another possible error choice is to basically never throw an error on “Get”; just return a “null” (or zero, NaN, empty string, etc.) if there is no way to convert the input JSON to a meaningful value of that type (this follows the practice of SQLite, which always provides a value regardless of a mismatch between the requested and stored data types). Then perhaps all “Get” instances should have a “found” boolean output.

    Ton, Shaun, what do you think?
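A minimal Python sketch of the “never error, always return a found flag” option (names are hypothetical; the real VIs operate on JSON objects, not dicts):

```python
def get_item(json_object, name):
    # Never-throw lookup: return (value, found) instead of an error,
    # in the spirit of SQLite's permissive typing.
    if isinstance(json_object, dict) and name in json_object:
        return json_object[name], True
    return None, False   # missing name, or not an Object at all

value, found = get_item({"a": 1.5}, "b")   # found is False; caller decides
```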

  4. Just added some improvements to the bitbucket repo. Below is the new “Example Extract… V2” example using the polymorphic Get function. Note that there is no object type-casting now, as the functions do the casting themselves (throwing an error if the input type is invalid).

    [attached screenshot]

    I’ve also been working on meaningful error messages. One issue I’d like comment on is getting an item by name from a JSON object when the item isn’t found. Currently this is not an error, but just makes the “found” output false. I would rather get rid of the “found” output and make it an error instead.

    — James

  5. I also looked into “public domain” and it seemed problematic.

    I would be happy to get rid of the requirements on binaries (buried in some readme file that no one ever reads). The attribution in source code, read by other developers, seems the only meaningful one, and it also creates no burden on customers, since one just leaves the license in the code or documentation.

    To make the function faster and to avoid memory copies I use references, and the controls/indicators are made “hidden” on the front panel to speed up.

    Your deeper problem is one of abstraction; you’re taking the “wire branches make copies” abstraction too literally. Actually, the compiler will only make copies when necessary. And even if copies are made, it would take a lot of copies to match the low performance of storing data in UI elements (which mostly comes from the necessary thread-switching into the UI thread). A DVR is what you should actually have chosen to avoid copies, although the best choice may have been to just let the compiler worry about it.

  7. I haven’t used this function, but I’ve done similar timing, and I find that both forms of timing (“since last” and “on schedule”) are needed. There is no reason to prefer one over the other as default, so it would be simpler to stick with the current behavior as the default, with the new one as an option.
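The difference between the two modes can be sketched in Python (illustrative names only, not the function under discussion):

```python
import time

def wait_since_last(task, interval_s):
    # "since last": the gap is measured from when the task *finished*,
    # so a slow iteration pushes every later one back.
    task()
    time.sleep(interval_s)

def next_deadline(t_start, interval_s, now):
    # "on schedule": fire at fixed multiples of the interval from
    # t_start, regardless of how long each iteration took.
    k = int((now - t_start) // interval_s) + 1
    return t_start + k * interval_s
```

With “on schedule”, a slow iteration is followed by a shorter wait so the long-run rate stays fixed; with “since last”, every wait is a full interval.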

  8. I’ve added a new version to the CR (sorry Ton, I will need to learn how to use Github).

    I added a JSON to Variant function. Note that I’m trying to introduce as much loose-typing as possible (very natural when going through a string intermediate), so the example below shows conversion between clusters that have many mismatched types, as well as different orders of elements. It would be nice to think about what type conversions should be allowed without throwing an error.

    [attached screenshot]

    And I’ve started on a low-level set of Get/Set polymorphic VIs for managing the conversion from JSON Scalars/Arrays to LabVIEW types (very similar to Shaun’s set, but without the access-by-name array). I’ve reformatted two of Shaun’s VIs to be based off the new lower-level ones. The idea is to restrict the conversion logic (which at some point will have to deal with escaped characters, special logic for null (==NaN), Inf, UTF-8 conversion, allowed Timestamp formats, etc.) to only one clearly defined place. At some point, I will redo the Variant stuff to work off of these functions rather than relying on the OpenG String Palette as they do now.

    [attached screenshot]
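The kind of loose-typed cluster conversion described above might look like this in Python (a sketch with hypothetical names; the real function works on LabVIEW variants):

```python
def to_cluster(json_object, defaults):
    # Loose-typed fill of a "cluster" from a JSON object: elements are
    # matched by name, order is ignored, and missing names keep their
    # default values.
    out = dict(defaults)
    for name, value in json_object.items():
        if name in out:
            target_type = type(out[name])   # coerce to the default's type
            out[name] = target_type(value)
        # names not present in the cluster are silently dropped
    return out
```

For example, a JSON object with elements in a different order, a numeric stored as a string, and an extra unknown name would still fill the cluster cleanly.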

  9. That one person then becomes the lightning rod for NI in the event that the others want to sue.

    Uh, wait, what? Can I be sued for stuff I post on NI.com? Sounds like an argument to not post anything on NI.com. Ties back to my previous point about the NI.com Terms of Use containing disclaimers to protect NI but not posters; do I need to add a legal disclaimer to every post and uploaded sample VI?

    And if something is adopted into LabVIEW, it becomes NI’s responsibility, surely?

    Trim Whitespace is a prime example of a VI that cannot be BSD when it goes into the palettes... we can't require every user of LV to remember to thank the authors of that VI every time they clean up a string.

    That’s why I like the “1 clause BSD”, dropping the binary requirement. The source code requirement is trivially satisfied by placing the license in the FP or BD or hidden away in the documentation.

    Edit added later: found this link: The Amazing Disappearing BSD License

    BTW to OpenG developers (JG if he’s reading): does OpenG not have some transfer of copyright to OpenG itself? It will be impossible to change licensing terms on OpenG once some of the authors die. I’d like to propose dropping the binary clause.

  10. So if I understand Stephen correctly, one of the things withholding NI from forking the code from LAVAG (amongst others) is the BSD requirement to have the author in the license notes of a binary. I can understand the 'tight coupling' argument by Stephen (NI==LabVIEW). We could create a special version of the BSD that would remove the attribution requirement for binaries.

    I like Ton’s idea. And not just for NI: having to compile a list of all licenses to make available in an executable is a pain, especially as no one will ever read them, while providing a license in source code is easy if the license is already on the FP, or BD, or documentation of the VIs themselves.

    Not all the libraries there will ever be desired to be picked up by LV, only the ones that seem to have broad appeal, but those few, it seems to me, should have some way to allow movement of these libraries from the CR into the primary distribution channel (i.e. LV Base) without creating licensing headaches for all involved. What that mechanism is, I have no idea.

    What about a test case? In OpenG there is a Trim Whitespace function that duplicates the same function in LabVIEW. Due to the work of several programmers, OpenG’s version has considerably higher performance. I don’t believe there is any other difference in function, no “advanced” features or anything, and thus no reason why the OpenG version shouldn’t be adopted as standard. What would it take to do this?

    — James

  11. Crud. Didn't think of that... I was thinking that you had just offered suggestions that drjdpowell acted on... but now that I revisit the thread, you actually contributed code.

    My statement stands -- everyone has to act as they believe they can and should.

    I could, presumably, post only the core part involving the JSON classes which I wrote, leaving out Shaun’s polymorphic accessors. Or I could get Ton and Shaun’s permission to post the whole thing (maybe; how does that work as only one person can actually post it on NI?).

  12. My understanding is... that at any moment AQ is going to rightly point out that it is only some lawyer's understanding that actually matters, and there is really no understanding of that. :)

    For discussion, what about the Unlicense:

    This is free and unencumbered software released into the public domain.

    Anyone is free to copy, modify, publish, use, compile, sell, or distribute this software, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.

    In jurisdictions that recognize copyright laws, the author or authors of this software dedicate any and all copyright interest in the software to the public domain. We make this dedication for the benefit of the public at large and to the detriment of our heirs and successors. We intend this dedication to be an overt act of relinquishment in perpetuity of all present and future rights to this software under copyright law.

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

    For more information, please refer to <http://unlicense.org/>

  13. On looking carefully at the BSD license that "JSON LabVIEW" is under, and at the "Terms of Use" governing posts on NI.com, I see that both have prominent legal-liability disclaimers. BSD says that I'm not liable, while the Terms of Use only say that NI is not liable. Should I be concerned by that?

    Also, I could not see why I, along with any other copyright holders of "JSON LabVIEW", couldn't legally upload the code to NI.com and thus trigger NI's ability to use the code unhindered by the BSD (I also didn't see anything that indicated that posted code is now owned by NI; just that, by posting, I grant them the right to do whatever they want with it).

  14. Question:

    As the author of a work released under BSD or some other license, am I not able to re-release it under another, less restrictive license, such as one waiving any attribution requirements? And can I not make it "public domain" with no licensing restrictions at all?

    Also, as a side point: if I were a company that did not want to allow open-source code, I would certainly not allow arbitrary code posted to ni.com. That such code is legally owned by NI would be immaterial, as it is not tested or certified by NI. Only if such code became part of LabVIEW would it be acceptable.

    Added later: I should explain that I always assumed that some companies did not allow Open Source software because of concerns about the quality of that code. But if the real issue is attribution and keeping track of all the licenses that have to be reproduced, then that is different. Personally, I don't really care about personal attribution beyond perhaps a note in the code itself.

    -- James

  15. I hate software licensing rules. Really hate them.

    And I just don't understand them. I picked BSD because it seemed to me to be entirely permissive, except for an acknowledgment. It's not "copyleft", which would prevent it from being used in a commercial product. Would making things "public domain" be any better?

    And though I understand why some companies may shy away from open-source software, preferring all code to come from "approved vendors", how does posting things on NI solve this issue?

    And for future knowledge, was it posting on the LAVA CR that creates the issue, or was it already tainted once we posted code in a conversation on LAVA?

    Anyway, this answers a question I've long had: Why does OpenG need a different "Trim Whitespace" VI; why doesn't NI just adopt the higher-performance version as standard LabVIEW?

    -- James

    We need a distribution system that is lightweight and reliable

    Do you not like VI Package Manager?

    What should we do with NaN, -Inf, and +Inf? JSON does not support them. NaN could be null but the others I don't know.

    Official JSON sets those three values to 'null'; however, I lean to this idea. For numerics we should use 1e5000, though.

    I think some "JSON" implementations have (perhaps wrongly) allowed these values, so it is probably a good idea to accept things like "Inf", "Infinity", and "NaN" when parsing in JSON.

    When writing JSON, a problem with the 1e5000 idea is that there is no defined size limit for JSON numbers; one could theoretically use it for arbitrarily large numbers. Not that I've ever needed 1e5000 :)

    Maybe there should be an input on "Flatten" that selects either strict JSON or allows NaN, Inf and -Inf as valid values.

    BTW, I'm on vacation without a LabVIEW machine, so I'll comment on the excellent code additions when I get back.
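The strict/lenient “Flatten” switch suggested above might be sketched in Python as follows (the input name is hypothetical; strict JSON has no NaN/Inf tokens, so those become null):

```python
import json
import math

def flatten(value, strict_json=True):
    # Sketch of a "Flatten" with a strict/lenient option.  In strict
    # mode the three special float values are written as null; in
    # lenient mode the non-standard tokens NaN/Infinity/-Infinity are
    # emitted (as many permissive JSON implementations do).
    if strict_json:
        def to_null(v):
            if isinstance(v, float) and not math.isfinite(v):
                return None
            if isinstance(v, list):
                return [to_null(x) for x in v]
            if isinstance(v, dict):
                return {k: to_null(x) for k, x in v.items()}
            return v
        return json.dumps(to_null(value), allow_nan=False)
    return json.dumps(value, allow_nan=True)
```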

  16. I would. There are a lot fewer "working" things in there already :) Alternatively, start a new thread.

    Added to the CR.

    Indeed. I have to be very careful about dependencies. Some clients insist on "approved vendors" or "no 3rd party/open source", and for most of the stuff in OpenG that I would use, I have my own versions that I've built up over the years. It's just easier not to use it than to get bogged down in lengthy approval processes.

    But won’t this package be 3rd party/open source? To everyone but us at least. And OpenG is a “Silver Add-on” on the LabVIEW Tools Network (oooh, shiny!).

  17. Ah, non-uniform spacing, I see.

    Greg,

    My first thought would have been to truncate the weighting calculation and fitting to only a region around each point where the weights are non-negligible. Currently, the algorithm uses the entire data set in the calculation of each point, even though most of the data has near-zero weighting. For very large data sets this overhead will be very significant.

    — James

    BTW: Sometimes it can be worth using interpolation to produce a uniform spacing of non-uniform data, in order to be able to use more powerful analysis tools. Savitzky-Golay, for example, can be used to determine the first and higher-order derivatives of the data for use in things like peak identification.
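The truncation idea above can be sketched in Python (illustrative only; names are hypothetical and this is not the original VI’s algorithm):

```python
import math

def smooth_at(xs, ys, x0, width):
    # Gaussian-weighted average at x0, truncated to the region where
    # the weight is non-negligible (|x - x0| <= 4*width, i.e. weights
    # below about e^-16 are skipped entirely).
    num = den = 0.0
    for xi, yi in zip(xs, ys):
        if abs(xi - x0) > 4.0 * width:
            continue                     # near-zero weight: skip this point
        w = math.exp(-((xi - x0) / width) ** 2)
        num += w * yi
        den += w
    return num / den if den else float("nan")
```

For sorted data, a binary search could locate the window so each output point costs O(window) instead of O(N), which is where the orders-of-magnitude speedup on large data sets would come from.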

  18. I would not be able to help you for a few weeks, as I'm off on vacation. I can see why your smoothing function is so slow, and I'm sure someone could easily improve its performance by orders of magnitude on large data sets. However, are you sure you would not be better served by using one of the many "Filter" VIs in LabVIEW? I tend to use the Savitzky-Golay filter, but there are many others that can be used for smoothing. They'll be much, much faster.
