Posts posted by drjdpowell

  1. 6 hours ago, smithd said:

    -Handle enums and timestamps and similar common types without being whiny about how it's not in the standard (yet apparently changing the standard for DBL types was just fine).
    -Discover and read optional components (technically possible with the LV API, but pretty wasteful and also gross).
    -I love the LAVA API's pretty-print.

    I’d add:

    - Work on a stream (i.e. allow the JSON Value to be followed by something else, like the regular Flatten functions have a “Rest of String” output).

    - Give useful error messages that include where in the very long JSON text the parser had an issue.

    - Work with “sub-JSON”.  Meaning: “I know there is an ‘Options’ item, but it can come in multiple forms, so just return me that item as JSON so I can do more work on it (or pass it on to an application subcomponent that does know what the form is).”

    The library I’m working on, JSONtext, is trying to be sort of an extension to the inbuilt JSON primitives that adds all these features.
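    The “Rest of String” idea is easy to illustrate outside LabVIEW; Python's `json.JSONDecoder.raw_decode` does exactly this, parsing one value off the front of a string and reporting where it stopped. A sketch of the concept only, not the JSONtext implementation:

```python
import json

_decoder = json.JSONDecoder()

def take_json(stream_text):
    """Parse one JSON value off the front of a string and return
    (value, rest_of_string) -- the behaviour of a Flatten function
    with a "Rest of String" output."""
    value, end = _decoder.raw_decode(stream_text)
    return value, stream_text[end:]
```

    Calling `take_json` repeatedly consumes a stream of concatenated JSON values, which is what a Websocket-style streaming parser needs.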

  2. 8 hours ago, ShaunR said:

    The other JSON library was just too slow for streaming Websockets and the NI primitive is as much use as a chocolate fireguard because it crashes out if anything isn't quite right. 

    One of the performance advantages of working directly with JSON strings is that, when converting JSON to/from a Variant, one can use the NI primitives to handle large numeric arrays, without ever letting it see (and throw a fit over) the entire string.  In fact, I can often pull out selected elements of a large JSON text faster than using the NI primitive’s “path” input (I think because the primitive insists on parsing the entire string for errors, while I don’t).  
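    As a rough illustration of why partial extraction can beat a full parse: scan for the key token, then parse only the value that follows it. This is a naive sketch (a real parser has to track nesting and quoted strings to avoid false matches), and `extract_item` is a hypothetical name, not an API from any of the libraries discussed:

```python
import json

_decoder = json.JSONDecoder()

def extract_item(json_text, key):
    """Return the value of a named item without parsing the whole
    document.  Naive: a key occurring inside a string literal or a
    nested object would fool this scan."""
    token = '"%s"' % key
    i = json_text.find(token)
    if i < 0:
        raise KeyError(key)
    j = json_text.index(':', i + len(token)) + 1
    while json_text[j] in ' \t\r\n':   # raw_decode rejects leading whitespace
        j += 1
    value, _end = _decoder.raw_decode(json_text, j)
    return value
```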

  3. I’m working on a new JSON library that I hope will be much faster for my use cases.  It skips any intermediate representation (like the LVOOP objects in the LAVA JSON API, or the Variants in some other JSON toolkits) and works directly on JSON text strings.  I’d like to avoid making a tool just for myself, so I’d like to know what other people are using JSON for.  Is anyone using JSON for large(ish) data?  Application Config files?  Communication with remote non-LabVIEW programs?  Databases?

  4. 1 hour ago, Neil Pate said:

    One minor thing which I have not investigated the reason for yet is that you cannot resize the tree column widths.

    XControls don’t automatically save display properties of their contained controls; one has to explicitly save/load in special methods of the XControl.  A big pain, given the large number of properties.

  5. What toolkits do people use for accessing databases?   I’ve been using SQLite a lot, but now need to talk to a proper database server (MySQL or Postgres).  I’ve used the NI Database Connectivity Toolkit in the (far) past, but I know there are other options, such as using ADO.NET directly.

    What do people use for database connectivity?  What would you recommend?

    — James

  6. Speculative, to establish that the work is legitimate R&D of my company.  I want to be able to show a tax inspector a company-branded product on the LabVIEW Tools Network.

    BTW, I’ve found I can separate the new stuff into a separate, non-conflicting project, so LAVA JSON can be left as is (I’ll bring out a new version with some modest performance improvements).

  7. 1 hour ago, ShaunR said:

    What are the criteria for the tax exemption that requires this move? What makes this toolkit a good choice over, say, your messaging library, which has had little or no collaborative contribution so far?

    I’m recording my work on Messenger Library and all other reuse libraries, and hope to claim at least some of them also.   JSON just happens to be what I am working on right now.  

  8. JSON LabVIEW is BSD licensed.  It’s not “Copy Left”, so one doesn’t have to open-source any derivative work.  

    BTW, what I’m working on is a major change that gets rid of the intermediate representation as LVOOP objects entirely.   The only representation is text.  So I may be able to make it an independent package that can be used in parallel with the LAVA JSON library.

     

  9. 1 hour ago, ShaunR said:

    I've no idea what you are talking about (R&D Tax credits?)

    https://www.gov.uk/guidance/corporation-tax-research-and-development-tax-relief-for-small-and-medium-sized-enterprises

    1 hour ago, ShaunR said:

    I would prefer you create a new product derived from the current one, to mark a distinct change, rather than attempt a seamless absorption. In time, people will probably move to your new product in preference, but I think a distinction should be made and control of the current project handed over to the LavaG admins.

    I was thinking of that as an option.   The LAVA community one would be version 1.4.1.  I’m afraid I’ve checked in some things starting with a switch to 2013; you should ignore those and branch from the last 2011 checkin.   I’ll set up a new repo and a new project name.  

  10. I’m doing a lot of work at the moment on a “JSON 2.0” version, particularly intended to improve performance in common use cases that I’ve encountered. For example, I have a new VI that flattens a Variant directly to JSON text (skipping the Objects entirely) that works on large arrays of numerics about 30 times faster.   
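    The fast path for large numeric arrays can be sketched in a line of Python: build the JSON text directly from the numbers, with no intermediate tree of objects. (The 30x figure comes from the LabVIEW measurement above, not from this sketch, and `numeric_array_to_json` is an illustrative name.)

```python
def numeric_array_to_json(values):
    """Serialise a numeric array straight to JSON text,
    skipping any intermediate object representation."""
    return '[' + ','.join(repr(float(v)) for v in values) + ']'
```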

    I’d like to claim R&D tax credits on the work I do on this (which is otherwise unpaid), and to do that the result of the work needs to be owned by my limited company, so I’d like to release the new version under “JDP Science Limited”, rather than “LAVA”.   I would put a BSD license on the new version, and properly comply with the BSD license of the current LAVA version.  I hope none of the contributors of "JSON LabVIEW” object?   The R&D credit would work out to about 30% of charged rate (if I do the tax math right, which is far from obvious).  30% is a lot better than zero.

     

  11. 11 hours ago, MarkCG said:

    A very good way to take an N-order derivative of a noisy signal is the Savitzky-Golay filter, which is in LabVIEW. Read up on it; you will see it is much better than the naive method of taking the derivative (the simple forward, backward, and central approximations).

    I can also recommend Savitzky-Golay, which is basically just fitting an Nth-order polynomial, and it estimates multiple derivatives at once.  Apply some “straight-line” criteria, such as all derivatives beyond the first being “small”.
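    For readers outside LabVIEW, the idea can be sketched in Python/NumPy as a naive per-window polynomial fit (`scipy.signal.savgol_filter` computes the same thing far more efficiently with a precomputed convolution kernel):

```python
import numpy as np

def savgol_derivative(y, dx, window=11, order=3):
    """Naive Savitzky-Golay: fit an order-N polynomial to each
    sliding window of samples and evaluate the polynomial's first
    derivative at the window centre."""
    half = window // 2
    t = (np.arange(window) - half) * dx      # abscissa relative to window centre
    out = np.empty(len(y) - 2 * half)
    for i in range(len(out)):
        coeffs = np.polyfit(t, y[i:i + window], order)
        out[i] = np.polyval(np.polyder(coeffs), 0.0)
    return out
```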

  12. 1 hour ago, ShaunR said:

    There are no rules (except self imposed ones) that say the data has to be an integral part of a message. 

    Actually, that is an Actor Model rule ("messages should be immutable").  It’s a rule I sometimes break, but not without pause for thought.  Where I have broken the rule I have sometimes had race-condition bugs.  The value of rules is not in blindly obeying them, but in understanding why one should be reluctant to break them.

  13. 17 hours ago, pawhan11 said:

    I think sometimes globals or DVRs will be more suitable than messages. For example, when we have a large buffer of data points that some process is acquiring and storing in memory, others will use that data. By using globals/DVRs it is basically just set and get. Using messages involves flattening data to a variant/string in order to pass it by the message implementation. With large data this delay might be significant.

    Variants don’t “flatten”.  Putting something in a variant doesn’t involve altering or copying the data.  They have overhead but I don’t think the size of the data matters.

    Never use Globals for big data.  Globals always copy when you read them.  So does any “Get”.   Avoid a copy by extracting only the required data inside the structure you are using.  So, inside the IPE with a DVR, inside the Action Engine, or inside the message sending code.  
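    The “extract inside the structure” rule looks the same in any language: do the slicing while you hold the protection, so only the small piece you need is ever copied out. A rough Python stand-in for a DVR or Action Engine (the class and its methods are illustrative, not any library's API):

```python
import threading

class SharedBuffer:
    """Lock-protected buffer, standing in for a DVR or an AE's
    uninitialised shift register."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = []

    def append(self, samples):
        with self._lock:
            self._data.extend(samples)

    def last_n(self, n):
        # Slice *inside* the lock: only n elements are copied out,
        # never the whole buffer -- the equivalent of extracting
        # the required data inside the IPE structure.
        with self._lock:
            return self._data[-n:]
```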

  14. 9 hours ago, JKSH said:

    I still use AEs for very simple event logging. My AE consists of 2 actions:

    1. "Init" where I wire in the log directory and an optional refnum for a string indicator (which acts as the "session log" output console)
    2. "Log" (the default action) where I wire in the message to log.

    Then, all I have to do is plop down the VI and wire in the log message from anywhere in my application. The AE takes care of timestamping, log file rotation, and scrolling the string indicator (if specified).

    Having more than 2 actions or 3 inputs makes an AE not that nice to use -- I find myself having to pause and remember which inputs/outputs are used with which actions. I've seen a scripting tool on LAVA that generates named VIs to wrap the AE, but that means it's no longer a simple AE.

    Make it that simple and perhaps I would use Action Engines.  Turn the enum into a “(Re)Initialize” boolean, or have the code initialize itself on first call, and you’ll find such things in my code.  But none of them will be used to communicate between code modules, as AEs often are.  Logging is a good example of where multiple things can share something without affecting each other.

  15. On September 10, 2016 at 8:35 AM, ShaunR said:

    Well. there is no "Total bytes downloaded" emitted by the downloaders ...

    I must not be understanding the problem you’re presenting.   Perhaps you could explain an AE/DVR solution and illustrate how it cannot be done with messages.

    On September 10, 2016 at 8:35 AM, ShaunR said:

    I'm not sure why Dispatcher is thrown in here. That isn't a messaging framework or a memory accessor or really anything close to what we are discussing. Dispatcher is some turbocharged TCPIP primitives for a specific purpose (pub/sub). It has more in common with network streams than a messaging framework. It is meant to be a component in your system not the system itself. I suppose when an actor is just code that does something and messaging is just information flow then even a simple API becomes a "Messaging Framework". I don't subscribe to the philosophy of tortured vocabulary, though.

    If you are looking for my "Messaging Framework" then you need to look at the VIM Demo, where it is one VI @ ~700KB. The merits of each messaging approach are irrelevant in this discussion, though. We should instead concentrate on the OP's question.

    You mention Daklu, who only has LapDog as a reuse library.  LapDog is less a messaging framework than your Dispatcher.  Dispatcher is an implementation of a “Message Broker” messaging pattern, I believe, so one is making more of a design choice by using it than one would by using LapDog.  My “Messenger Library” is meant as a flexible message-passing library, where you don’t need to use the “actor” stuff at all if you’d rather not (most of the examples installed with Messenger Library are simple message passing).  Its central messaging pattern is “Request-Reply”.  The Actor Framework, on the other hand, is very much intended to be a framework that enforces a certain style of program architecture.  One should (I hope) find Messenger Library very useful even without following design principles similar to mine, but it is (deliberately) hard to use the AF in a way not intended by AQ.

    Perhaps, though, you actually meant “principles” rather than “framework”, as Daklu has written a lot about the “actor-oriented” design principles he follows.

  16. 1 hour ago, ShaunR said:

    This is solvable with some sort of protected global storage (a DB, an AE, a DVR, a Go Pro) but not by pure messaging alone.

    You’re kidding, right?  Just register for “Total bytes downloaded” and look at the increase (which stops when they finish).  Trivial.

    Edit>> or register a Queue for all the “Throughput” messages and average once every 10 secs.   Also easy.  And are you really worried about sub-millisecond timing uncertainties on a 10-second average?  If so, I think you have those with an AE also, just due to OS jitter.
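    The “register a Queue and average every 10 secs” option is just a drain-and-average over whatever messages arrived in the period. A minimal sketch (the plain-number message shape is an assumption; real “Throughput” messages would carry more structure):

```python
from queue import Empty, Queue

def drain_and_average(q):
    """Drain all queued "Throughput" messages and return their mean.
    Call once per polling period (e.g. every 10 s)."""
    total, count = 0.0, 0
    while True:
        try:
            total += q.get_nowait()
            count += 1
        except Empty:
            break
    return total / count if count else 0.0
```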

    1 hour ago, ShaunR said:

    But it is not learning "messaging". It is learning someone's particular flavour of framework: yours, AQ's, Michael's, Daklu's.

    Don’t forget your “Dispatcher".

  17. 23 minutes ago, ShaunR said:

    An action engine to do the averaging over N channels is about 2 minutes to produce one VI (~2k), that can be explained to a novice in 30 seconds regardless of the architecture you use. 

    A class with a DVR is about 5 minutes, 3 or 4 VIs (about 30K), and you have to promise there will be only one (but that's probably OK for Neil), and could be explained to a novice in 30 secs IF he was OK with classes to begin with.

    What's your overhead and how long until a novice would know it "very well"?

    Well, the simplest way to do that would be to Register Notifiers (well, NotifierMessengers) for the info one wants and then For-loop over the Notifiers to get your snapshot.  I’m not that fast at coding but it’s no faster for me to make an AE.  And don’t you have to modify all your data sources to add calls to this new AE?  I don’t have to modify any of the data-producing actors, my actors aren’t code-coupled via this subVI and can be reused as is, and some of my actors can be on remote systems (try that last one with your action engine!).  

    How long would it take a novice?   I’m not as positive about the ability of novices to understand action engines (in my experience, they seem to love local variables), and messaging patterns are admittedly more of a learning curve.  But learning is an investment, and learning messaging is well worth it.

  18. 1 hour ago, Neil Pate said:

    I used to do things like this, where the data was in the message itself. However, I found maintenance of the user events became too much of a pain. Now I just have simple "data changed" events, and then the thing listening polls the data it cares about from the global data store.

    My actors themselves are all 100% ByVal; I just use the data store as the mechanism for accessing the data across processes.

    I have one User Event per “actor”, which basically carries Text-Variant messages.  No maintenance at all.  The main weakness of polling on “data changed” is that you can miss updates.
