Everything posted by drjdpowell

  1. I’ve been benchmarking it (by just running a “Bind” in a loop), and using the path adds about 25 nanoseconds per CLN. Haven’t figured out yet why my code seems to be slower than Shaun’s (hope it’s not the LVOOP).
  2. A quick test using my “Example1” shows that I can INSERT 100,000 points, each involving 4 calls with a diagram path, in 0.75 seconds (this time does not include the “COMMIT” to disk). That’s less than 2 microseconds per CLN. So the overhead of the diagram path can’t be that much. Though if it is a significant fraction of the 2 microseconds, then it will be good to eventually get rid of it. — James

    Added later: I had a look at ShaunR’s “SQLite_Speed Example.vi” which INSERTs pairs of strings: he can INSERT 100,000 in 0.36 seconds, half my time. So perhaps I will look into statically specifying the library. Wish I could specify it in one place, though. One thing a User might want to do is have a different SQLite3 version (compiled with different options, for example) for different applications, and statically specifying the library for each CLN makes that problematic. Is there any way to specify the path at runtime, but do it only once? Or at compile time, but specify it in only one place?
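
    For reference, the pattern this benchmark exercises is the standard SQLite prepare-once / bind-step-reset loop inside a single transaction. A minimal C++ sketch of that underlying C-API pattern (the table and column names are placeholders, not the actual Example1 code):

        #include <sqlite3.h>

        int main() {
            sqlite3 *db = nullptr;
            if (sqlite3_open("example.db", &db) != SQLITE_OK) return 1;

            sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS points (t REAL, x REAL)",
                         nullptr, nullptr, nullptr);
            sqlite3_exec(db, "BEGIN", nullptr, nullptr, nullptr);

            // Prepare once; reuse the statement for every row.
            sqlite3_stmt *stmt = nullptr;
            sqlite3_prepare_v2(db, "INSERT INTO points (t, x) VALUES (?, ?)",
                               -1, &stmt, nullptr);

            for (int i = 0; i < 100000; ++i) {
                sqlite3_bind_double(stmt, 1, i * 0.001);  // "Bind" the parameters
                sqlite3_bind_double(stmt, 2, i * 2.0);
                sqlite3_step(stmt);                       // execute the INSERT
                sqlite3_reset(stmt);                      // ready for the next row
            }

            sqlite3_finalize(stmt);
            sqlite3_exec(db, "COMMIT", nullptr, nullptr, nullptr);  // the disk write happens here
            sqlite3_close(db);
            return 0;
        }
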
  3. Hi Matt, thanks for bringing your experience to this. It was my feeling that there is no clean way to directly connect SQLite3’s loose typing system with LabVIEW variants. One could make a system similar to the OpenG Variant Config VIs, where one inputs a cluster to define the datatypes to read in, but a straight “Get Column as Variant” seems to have too many gotchas to be worth it. If one did want to store arbitrary LabVIEW datatypes in SQLite, one could just flatten the data and store it as a BLOB, but I thought that option could be left outside the scope of the package. Please do. I have wondered if it is a good idea to make functions like Step or Finalize also available as Property nodes, as that would allow more compact code in many cases (though as these functions aren’t really “properties” that might be confusing). Is that true? I wouldn’t have thought that, but I have never tested it. The advantage of passing the dll path is that one can alter it easily. Do you have any performance data with your system that I could compare to? I realized this after I did it. But I don’t want to introduce “pointers” into any public API function like “Prepare”. I am considering making an alternate, private version of “Prepare” that uses a pointer in this way to allow higher performance in VIs like “Execute SQL”. On the “to do” list. Slightly tricky because of the need for mutexes described in the documentation: "When the serialized threading mode is in use, it might be the case that a second error occurs on a separate thread in between the time of the first error and the call to these interfaces. When that happens, the second error will be reported since these interfaces always report the most recent result. To avoid this, each thread can obtain exclusive use of the database connection D by invoking sqlite3_mutex_enter(sqlite3_db_mutex(D)) before beginning to use D and invoking sqlite3_mutex_leave(sqlite3_db_mutex(D)) after all calls to the interfaces listed here are completed." — James
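
    For illustration, a minimal C++ sketch of the mutex pattern the quoted documentation describes: hold the connection’s mutex across the call that might fail and the subsequent error query, so another thread’s error cannot be reported instead (this is just the documented C-API usage, not the package’s internal code):

        #include <sqlite3.h>
        #include <cstdio>

        void exec_with_error_check(sqlite3 *db, const char *sql) {
            sqlite3_mutex_enter(sqlite3_db_mutex(db));   // exclusive use of this connection

            if (sqlite3_exec(db, sql, nullptr, nullptr, nullptr) != SQLITE_OK) {
                // errcode/errmsg now reliably refer to *this* failure
                std::printf("error %d: %s\n", sqlite3_errcode(db), sqlite3_errmsg(db));
            }

            sqlite3_mutex_leave(sqlite3_db_mutex(db));
        }
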
  4. View File SQLite Library Introductory video now available on YouTube: Intro to SQLite in LabVIEW SQLite3 is a very light-weight, server-less, database-in-a-file library. See www.SQLite.org. This package is a wrapper of the SQLite3 C library and follows it closely. There are basically two use modes: (1) calling "Execute SQL" on a Connection to run SQL scripts (and optionally return 2D arrays of strings from an SQL statement that returns results); and (2) "Preparing" a single SQL statement and executing it step-by-step explicitly. The advantage of the latter is the ability to "Bind" parameters to the statement, and get the column data back in the desired datatype. The "Bind" and "Get Column" VIs are set as properties of the "SQL Statement" object, for convenience in working with large numbers of them. See the original conversation on this here. Hosted on the NI LabVIEW Tools Network. JDP Science Tools group on NI.com. ***Requires VIPM 2017 or later for install.*** Submitter drjdpowell Submitted 06/19/2012 Category Database & File IO LabVIEW Version 2013 License Type BSD (Most common)
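
    As a rough illustration of the two use modes in terms of the underlying C API that the package wraps (the table name and queries here are placeholders):

        #include <sqlite3.h>
        #include <cstdio>

        // Mode (1): sqlite3_exec hands every result row back as strings,
        // roughly what "Execute SQL" returning a 2D string array wraps.
        static int print_row(void *, int ncols, char **vals, char **names) {
            for (int i = 0; i < ncols; ++i)
                std::printf("%s=%s  ", names[i], vals[i] ? vals[i] : "NULL");
            std::printf("\n");
            return 0;
        }

        void two_modes(sqlite3 *db) {
            sqlite3_exec(db, "SELECT t, x FROM points LIMIT 5", print_row, nullptr, nullptr);

            // Mode (2): prepare one statement, "Bind" a parameter, and read
            // each column back in the desired datatype.
            sqlite3_stmt *stmt = nullptr;
            sqlite3_prepare_v2(db, "SELECT x FROM points WHERE t > ?", -1, &stmt, nullptr);
            sqlite3_bind_double(stmt, 1, 0.05);
            while (sqlite3_step(stmt) == SQLITE_ROW)
                std::printf("x = %f\n", sqlite3_column_double(stmt, 0));
            sqlite3_finalize(stmt);
        }
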
  5. Congrats. Thought you seemed busy. I have a 5-week old daughter myself. Katy. Only got the one, though; didn’t buy in bulk.
  6. I was just going to use the Timeout, which would throw an error message. Simpler to throw the error message downstream to the Consumer. One could add another input for a queue to send the error messages, but I’m thinking of going the simple route. If the Consumer is a standard Actor design of mine, it will publish received error messages, and Requestor can register for error messages if it wants them. I have a “Cancel Future” VI that can be applied to invalidate the future tokens if one needs this. This immediately causes the helper to error out and shut down, having the same effect as an “Exit” message. If the VI hierarchy that created the futures goes idle, that will also invalidate the queue references inside the futures and shut the helper down. So “Exit” functionality is already there if you want it and there is an automatic exit feature. Otherwise there is the timeout. But the helper is reusable. Once it works, I don’t care how complex it is internally because no-one needs to look inside it. And I only have to write it once; “Requestor” is code that needs to be written for each application. Instead of internal complexity, I care about the clarity and simplicity of the API. I had meant to ask you if your framework supports replies to messages. I would imagine it would if your messages are of the form “Target->Sender…” and can easily be reversed. But can your dispatcher gather replies into ordered groups?
  7. I like LVOOP and use it all the time, but using a non-reentrant VI as a gatekeeper on the resource seemed the obvious first choice to me. Rolfk’s FG solution looks quite clear to me. — James
  8. So I took the time to actually do it. Reworked the prototype “Futures” implementation I mentioned at the start of this conversation so that it had a helper actor. The above code implements this diagram (though I didn’t make the “Requestor” a message handler, it could be): Note the random delays in the three Actors; the reply messages are sent in arbitrary order, yet the set of messages received by the Consumer are always ordered A, B, C. The “helper actor” (not really a full actor, just an async subVI) is quite simple (though I have yet to complete error handling): “Redeem Future Tokens.vi” both waits for the futures to be filled, and destroys the Future Token (internally, the Future is a single-element queue). This deliberately makes it impossible to use polling on the Future. — James
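
    As a loose analogy only (not the LabVIEW implementation), the behaviour of a Future Token that blocks on redemption and is consumed by it is similar to a C++ promise/future pair, where get() waits for the value and invalidates the future afterwards:

        #include <future>
        #include <iostream>
        #include <string>
        #include <thread>

        int main() {
            std::promise<std::string> fill;                       // held by the replying actor
            std::future<std::string>  token = fill.get_future();  // held by the redeemer

            std::thread replier([&fill] { fill.set_value("reply A"); });

            std::cout << token.get() << "\n";  // "Redeem": waits for the fill...
            // ...and the future is no longer valid afterwards, mirroring the
            // destroy-on-redeem behaviour of the token.
            replier.join();
            return 0;
        }
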
  9. I was investigating the resources used in loading large numbers of subVI clones a month ago. There were 5 handles opened per clone (for a clone with no meaningful code in it other than a short wait). Edit later: actually it’s 3 handles.
  10. I’ve added this package to the code repository. The only major change since the last version is getting it to properly include the SQLite3 dll in executables.
  11. Version 1.11.3

    9,033 downloads

  12. Yeah, but how useful is that in LabVIEW? The basic use for “futures” in step-by-step text languages is very similar to the dataflow already present in LabVIEW. Only once we’re talking about message-handling loops does a “future” become interesting, and in that case it’s hard to see how useful they are when we’re already using asynchronous messaging. In your example application, it’s only the fact that you need multiple TransformData messages for only one DoCalc message that makes the futures solution interesting. It’s that you can pass an array of futures to DoCalc, and thus gather your four separate TransformData responses together, that is something you can’t otherwise do as easily. Not really. The helper actor I’m thinking of would be fully generic and reusable; it would be dynamically launched and configured with an array of Futures and index over them to get the array of messages. Its API would be very simple. I noticed that your futures were very similar to the “message reply” system I use. I attach a “return address” to the message, and you attach the future. Both allow the direction of responses to arbitrary recipients. Though with futures, the recipient has to be written to specifically handle futures, while with replies it’s just an ordinary message. — James
  13. I was thinking about this a while last night, and I wondered if the real value of “futures” is in defining an ordered grouping of otherwise independent asynchronous messages. Imagine, for example, that one process needs to make requests of several other processes, with responses to these requests being dealt with all at once. The problem here is that the Response messages come individually and in any time order, meaning that “Consumer” needs to have logic to identify and store the messages, and determine when all are available. The advantage of using an array of Futures here (passed between Requestor and Consumer) is the very fact that it is an array; it is grouped and has a defined order. Thus Consumer need only index out the elements of this array and need not have any complex logic. The array of Futures serves to predefine a grouping of multiple asynchronous messages that have yet to be sent. As is, the Futures have the downside of requiring potential blocking or polling in Consumer. However, this can be avoided by using a small helper process that is dedicated to waiting on the array of Futures and forwarding the resulting array of messages: Note that the “Wait on Responses” Actor is serving to group and order the messages, before passing them to the Consumer. Requestor makes a set of requests, and Consumer receives a corresponding set of responses. — James
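
    A sketch of that helper idea in C++ terms (purely illustrative; the names and message type are made up): a small spawned task owns the ordered array of futures, blocks on each in turn, and hands the Consumer one grouped, ordered bundle, no matter what order the replies arrive in.

        #include <functional>
        #include <future>
        #include <iostream>
        #include <string>
        #include <thread>
        #include <vector>

        using Reply = std::string;

        // The "Wait on Responses" helper: does all the waiting itself.
        void wait_on_responses(std::vector<std::future<Reply>> futures,
                               std::function<void(std::vector<Reply>)> deliver) {
            std::vector<Reply> bundle;
            for (auto &f : futures)          // index over the array; order is predefined
                bundle.push_back(f.get());   // blocks here, not in the Consumer
            deliver(std::move(bundle));      // one grouped, ordered message
        }

        int main() {
            std::vector<std::promise<Reply>> requests(3);
            std::vector<std::future<Reply>> futures;
            for (auto &p : requests) futures.push_back(p.get_future());

            // The Consumer is just a callback here; it never blocks or polls.
            std::thread helper(wait_on_responses, std::move(futures),
                               [](std::vector<Reply> replies) {
                                   for (auto &r : replies) std::cout << r << "\n";
                               });

            // Replies are filled in arbitrary order...
            requests[2].set_value("C");
            requests[0].set_value("A");
            requests[1].set_value("B");

            helper.join();                   // ...but are always delivered as A, B, C
            return 0;
        }
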
  14. One could also have a reusable process that waits on an array of futures, then forwards the results to the process that needs them. Then that simple process (that has nothing else it needs to do) would do the blocking. And it would have a timeout, of course, after which it would send an error message.
  15. Hi Daklu, A comment: If your “Model” was a complex, multi-loop construct like your last post diagram, it is possible that you might put your future-filling logic (“TransformData”) in a different loop than the Future-redeeming logic (“DoCalc”). It would then be possible for the future to be redeemed before it is filled, which for your DVR design would return default data, followed by an “invalid DVR” error message from “TransformData”. A “future” based on a Notifier would instead just block momentarily if this happened, and would be a much more widely applicable construct because of that. Your DVR future can only be used in cases where it is filled and redeemed in the same loop, or can otherwise be assured it is filled before redeemed. — James
  16. If something takes 10 ms and one delays blocking for 11 ms, then one has avoided blocking altogether. I hadn’t appreciated, though, that you are filling your futures in the same message handler that is redeeming them, and thus in your case there is no possibility of ever actually blocking on the redeeming of the futures. Clever, and I can’t think of a cleaner way of doing it.
  17. A “round robin message” would work, but would be serial, rather than parallel. And I suspect a “Wait on all Futures” actor would be just as simple. I suspect we all have somewhat different ideas about what “futures” might be. My first reading on futures was some webpage (which I can’t find again) that gave a pseudocode example like this:

        future A = FuncA()
        future B = FuncB()
        …do something else...
        C = FuncC(A, B)

    Here FuncA and FuncB run in parallel, and the code blocks at the first use of the results. Note that we can already do this kind of stuff in LabVIEW due to dataflow.
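
    That pseudocode maps fairly directly onto C++ futures, for comparison (FuncA/FuncB/FuncC are placeholders): A and B run in parallel, and the code only blocks where the results are first needed.

        #include <future>
        #include <iostream>

        int FuncA() { return 2; }
        int FuncB() { return 3; }
        int FuncC(int a, int b) { return a + b; }

        int main() {
            auto A = std::async(std::launch::async, FuncA);   // future A = FuncA()
            auto B = std::async(std::launch::async, FuncB);   // future B = FuncB()

            // ...do something else...

            int C = FuncC(A.get(), B.get());  // blocks here, at first use of the results
            std::cout << C << "\n";
            return 0;
        }
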
  18. Thoughts on Futures, as I understand them (and without reexamining Daklu’s implementation): 1) Isn’t the point of futures to block (not poll), just like synchronous operations, but to delay the block until the information is needed, rather than when it is requested? It’s “lazy blocking”. And very similar to standard LabVIEW data flow (blocking till all inputs available). 2) One use of Futures I can think of is if I wish to request information from several processes, and perform some action only when I receive all replies. I can send all the requests and pass the array of futures to a spawned “Wait on all Futures” process/actor that sends a single bundled-reply message back to the original process when all the futures are filled. This would be much easier than having to record each reply and checking to see if I have all of them. — James
  19. It will only work with by-value objects in limited cases. Every wire branch leads to a separate object. For example, in your last attachment, you have an “ImplementorInit” VI that returns five entirely independent “Implementer” objects; two in Child 1 (at parent and child levels), two in Child 2, and an overall object that holds Children 1 and 2. If these were five references to a single by-ref object then you would be able to work on that object from any of your methods. But by-value you are working with different objects; changing one has no effect on the others. In the code I posted, I’m trying to keep all the by-value objects together, with no copies, so any method in one interface can call methods on any or all of the other interfaces. Child2 can access and modify Child1. Child1 can access and modify Child2. — James BTW: Daklu has an interface framework in the code repository. He uses a by-ref object (using a DVR).
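
    A tiny C++ illustration of the by-value versus by-reference distinction (the names are made up): each copy of a value is an independent object, while shared references all see the same change.

        #include <iostream>
        #include <memory>

        struct Implementer { int state = 0; };

        int main() {
            Implementer original;
            Implementer copy = original;          // like branching a by-value wire
            copy.state = 42;                      // changes the copy only
            std::cout << original.state << "\n";  // still 0

            auto byRef = std::make_shared<Implementer>();
            auto alias = byRef;                   // another reference to the same object
            alias->state = 42;                    // visible through every reference
            std::cout << byRef->state << "\n";    // 42
            return 0;
        }
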
  20. Are your examples meant to involve by-value or by-reference objects? I can see how they work for by-reference objects, but they wouldn’t work with by-value objects (since you’re making copies at every wire branch).
  21. It wasn’t/isn’t clear to me what this idea is asking for that the conditional disable structure doesn’t already do. Could you give an example?
  22. Interesting. I didn’t know N2O had enough greenhouse potential to be an issue despite its small concentration. But it still isn’t significant, as the greenhouse effect of the N2O from vehicles is only a bit over 1% of the effect of CO2 from the same vehicles (if I read this correctly).
  23. An experimental modification: Run “Example B.vi”. Interface B.zip
  24. The noxious gases that the catalytic converter works on are only a very small fraction of the gases produced, so it doesn’t make a meaningful difference. Also, the sun’s UV light will eventually complete the oxidation of the carbon in those gases to CO2. Too slowly to affect ground-level pollution, but fast on a timescale of climate change.