Everything posted by drjdpowell

  1. OK, but why do you call the Init.vi in a loop over an array of objects? Why not take the individual objects, call Init on them, then call the parameter-setting VI, and only then put them in an array? Like this (modifying your “old style” example): Here “Init.vi” can be dynamically dispatched, while “Write String.vi” can be a unique method for each child class. You could even branch the wire and reuse the preinitialized object with different parameters. Basically, I don’t understand the reasoning behind your initial example, where you put the objects in an array to call Init in a loop, then strip them all out again to treat them individually, then put them all back in an array again. It seems pointlessly overcomplicated. — James (sorry for the late reply, I’ve been on holiday without internet access)
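     A rough text analogue of that wiring, sketched in Python (the class and method names are hypothetical stand-ins for the VIs discussed above):

     ```python
     class Command:
         def init(self):               # plays the role of the dispatched "Init.vi"
             self.ready = True

     class WriteCommand(Command):
         def write_string(self, s):    # child-specific "Write String.vi"
             self.payload = s

     # Initialize and configure each object individually...
     cmd = WriteCommand()
     cmd.init()                        # dynamic dispatch happens here
     cmd.write_string("hello")         # ...set its specific parameters...
     commands = [cmd]                  # ...and only then build the array
     ```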
  2. If you’d like to stay by-value and avoid the DVRs, you can instead keep your objects in an array and store each object’s array index in the variant-attribute lookup tables.
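     A minimal sketch of that indexing scheme in Python, with a dict standing in for the variant-attribute lookup table (names are mine):

     ```python
     objects = []    # by-value array of objects
     lookup = {}     # stands in for the variant-attribute table

     def register(name, obj):
         lookup[name] = len(objects)    # store the array index, not a reference
         objects.append(obj)

     def fetch(name):
         return objects[lookup[name]]   # index back into the by-value array

     register("motor", {"speed": 0})
     fetch("motor")["speed"] = 10
     ```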
  3. Why are you using a dynamically-dispatched “Init” method at all? Why not just have a “Create <commandname>” method specific to each command class that initializes it (with specific parameter inputs) and then sends it off to be executed?
  4. He’s using a Property Node: Right-click>>Create>>Property Node>>Value. This should exist in 8.0. You could also use a local variable (Right-click>>Create>>Local Variable).
  5. Other than for TCP and UDP, how many different “Is Valid” methods can you possibly need? Note that you can make a method “protected”, which would allow your base class to call child-class methods without making the methods public. Also note this conversation and consider if you really need an “Is Valid” method.
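     The “protected” suggestion is essentially the template-method pattern; a minimal Python sketch (names are hypothetical, with the underscore convention standing in for LabVIEW’s protected scope):

     ```python
     class Connection:
         def is_valid(self):        # public method on the base class
             return self._check()   # calls down into the protected child method

         def _check(self):          # "protected": children override it, callers don't touch it
             raise NotImplementedError

     class TCPConnection(Connection):
         def __init__(self, sock=None):
             self.sock = sock

         def _check(self):
             return self.sock is not None   # child-specific validity test
     ```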
  6. I wish it were easier to change namespacing once code is used in multiple projects. Or if it were possible to define a “display name” and a separate unique identifier that one could leave unchanged. So my “Actor” class could secretly be “47D44584FGHT” or whatever. No collisions then.
  7. The actual parent-class data of the object. An object of a child class is a collection of clusters, one for each level of class hierarchy. You can call any parent-class method on any child-class object. Note that I am being careful to call it a “child-class object”, not a “child object”, as the whole “parent/child” metaphor really only applies to classes. No actual object is a “child” of any other object.
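     In text-language terms, the point is that an instance of a child class carries the parent-class data and accepts parent-class methods; a tiny Python illustration (names are mine):

     ```python
     class Parent:
         def __init__(self):
             self.parent_data = 0     # the parent-level "cluster"
         def describe(self):
             return type(self).__name__

     class Child(Parent):
         def __init__(self):
             super().__init__()       # the child-class object contains the parent's data too
             self.child_data = 1

     obj = Child()
     print(obj.describe())   # a parent-class method, called on a child-class object
     ```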
  8. Seven months later, and I’m working on getting my reuse messaging code into a VIPM package. And another issue occurs to me. I now have most of my classes outside of any lvlib library, which prevents unnecessary loading. However, that means I have rather generic class names like “ErrorMessage”, “Messenger”, and “Actor”, with no further namespacing, which I could easily imagine another package also using. What do other people do about this potential conflict? — James
  9. In the end I decided to add a “Text Encoding” property to the “SQL Statement” class, with choices of UTF-8, system (converted to UTF-8 with the primitives Shaun linked to), and UTF-16; system is the default choice. I also added the system-to-UTF-8 conversion primitives on all inputs like SQL text and database names (thanks Shaun). I also used the sqlite3_errmsg text to give more useful errors (thanks Matt).
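     Roughly what that “Text Encoding” choice does, sketched in Python (the three choices are from the post; the conversion calls are plain Python, not the LabVIEW primitives):

     ```python
     import locale

     def to_utf8(raw: bytes, encoding: str = "system") -> bytes:
         """Convert a raw byte string to the UTF-8 that SQLite expects."""
         if encoding == "UTF-8":
             return raw                                    # already UTF-8; pass through
         if encoding == "UTF-16":
             return raw.decode("utf-16-le").encode("utf-8")
         # "system" (the default): assume the local code page and convert
         return raw.decode(locale.getpreferredencoding(False)).encode("utf-8")
     ```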
  10. Are you being sure to throw away any bits above the lower 25 when the encoder rolls over? The code below should give the correct U25 difference even though it’s using U32s: Added later: actually, the fact that the “random values” alternate with the good ones suggests some other problem, unless you are reading the encoder about twice per revolution.
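     The masked subtraction in question, sketched in Python (the 25-bit mask and the sign-fold are the whole trick; names are mine):

     ```python
     MASK25 = (1 << 25) - 1             # keep only the lower 25 bits

     def u25_diff(new: int, old: int) -> int:
         """Signed difference of two 25-bit encoder counts, rollover-safe."""
         d = (new - old) & MASK25       # modulo-2**25 subtraction
         if d >= 1 << 24:               # fold into the signed range [-2**24, 2**24)
             d -= 1 << 25
         return d

     assert u25_diff(3, MASK25 - 1) == 5   # correct across a rollover
     ```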
  11. The heartbeat can be built into your reusable TCP component (I’ve considered adding it to my background processes running the TCP connection). You don’t have to make it part of the actor code itself.
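     One way such a heartbeat could live inside the reusable TCP component rather than the actor code, sketched in Python (the frame format and names are hypothetical):

     ```python
     import socket, threading, time

     def start_heartbeat(sock: socket.socket, interval: float = 1.0):
         """Background sender owned by the TCP component, not the actor."""
         def run():
             try:
                 while True:
                     sock.sendall(b"PING")   # hypothetical heartbeat frame
                     time.sleep(interval)
             except OSError:
                 return                      # connection broke; the component reports it
         threading.Thread(target=run, daemon=True).start()
     ```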
  12. I was reaching a bit with the “any actor” thing. Though my actors tend to be quite “server”-like already, much more than your “SlaveLoops” or the Actor Framework “Actors”. They publish information and/or reply to incoming messages, and don’t have an “output” or “caller” queue or any hardwired 1:1 relationship with an actor at a higher level. They serve clients. The higher-level code that launches them is usually the main, and often only, client, but this isn’t hardcoded in. Thus, they are much more suitable to be sitting behind a TCP server. — James
  13. Well, my current TCPMessenger is set up as a TCP Server, and is really only suitable for an Actor that itself acts as a server: an independent process that waits for Clients to connect. A break in the connection throws an error in the client, but the server continues waiting for new connections. This is the behavior I have needed in the past, where I had Real-Time controllers that must continue operating even when the UI computer goes down. However, most of my actors (in non-network, single-app use) are intended to shut down if their launching code/actor shuts down for any reason. For this I have the communication reference (queue, mostly) created in the caller, so it goes invalid if the caller quits, which triggers shutdown as in your case 1. Case 2 doesn’t apply in my system, as there is no output queue. Now, if I wanted auto-shutdown behavior in a remote actor, I would probably need to make a different type of TCPMessenger that worked more like Network Streams than a TCP server, so that a break in the connection is an error on both ends and the remote actor is triggered to shut down. — James
  14. That’s the route I took in adding TCP message communication to my system, with “loop D” being a reusable component dynamically launched in the background. Incoming TCP messages are placed on a local queue, which can also have local messages placed on it directly. Because “TCPMessenger” is a child of “QueueMessenger”, I can substitute it into any pre-existing component that uses QueueMessenger, making that component available on the network without modifying it internally. — James
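     The substitution works because the child honors the parent’s queue interface; a minimal Python sketch (the two class names follow the post, everything else is hypothetical):

     ```python
     import queue, threading

     class QueueMessenger:
         def __init__(self):
             self.q = queue.Queue()
         def send(self, msg):      # local messages go straight onto the queue
             self.q.put(msg)
         def receive(self):
             return self.q.get()

     class TCPMessenger(QueueMessenger):
         """Drop-in child: incoming TCP messages land on the same local queue."""
         def __init__(self, conn):
             super().__init__()
             threading.Thread(target=self._pump, args=(conn,), daemon=True).start()
         def _pump(self, conn):
             while data := conn.recv(4096):
                 self.q.put(data)  # network traffic merges with local messages
     ```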
  15. I use variants mostly for simple values, to avoid having 101 different simple message types. I have four or five polymorphic VIs for simple messages, and having so many different message types would be unmanageable. But for any complex message, I usually have a custom message class. It’s no more complex to make a new message class than to make a typedef cluster to go inside a variant message. — James I’m hoping to work up to testing my object messages on an sbRIO next week. I will be sure to run a memory-leak test.
  16. My limited testing was mostly functional (and over a slow-uploading home broadband connection), so I can’t answer for throughput. My messages vary in size, but common small ones have at least two objects in them (the message itself and a “reply address” in the message) and so flatten into quite a long string: a “Hello World” message is 75 bytes. I wrote a custom flattening function for the more common classes of message, which greatly reduced that size. Also, for larger messages that can involve several objects, I found that ZLIB compression (OpenG) works quite well, due to all the repeated characters like “.lvclass” and long strings of mostly zeroes in the class version info. I use “Flatten to String”, which works great for communicating between LabVIEW instances. If you need one end to be non-LabVIEW, then you’ll want something like XML or JSON. — James
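     The compression win comes from those repeated substrings; a quick Python illustration (the “flattened” string here is a fake stand-in, not real LabVIEW output):

     ```python
     import zlib

     # Fake flattened object: repeated ".lvclass" names and runs of zero bytes
     flattened = (b"MyMessage.lvclass" + b"\x00" * 40) * 5

     packed = zlib.compress(flattened)
     print(len(flattened), "->", len(packed))   # sizes will vary; far smaller packed

     assert zlib.decompress(packed) == flattened
     ```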
  17. I believe AMC uses UDP, instead of TCP, for network communication. Depending on what you are doing, you might require the lossless nature of TCP. You should also have a look at ShaunR’s “Transport” and “Dispatcher” packages in the Code Repository; they do TCP. I’ve done some (limited) testing of sending objects over the network (not Lapdog, but very similar), and the only concern I had was the somewhat verbose nature of flattened objects. I used the OpenG Zip tools to compress my larger messages and that worked quite well.
  18. An update on the use of the library path in the CLN node: I found through testing that some of my subVIs ran considerably slower than others, and eventually identified that it was due to details of how the class wire (from which the library path is unbundled) is treated. Basically, a subVI in parallel with the CLN node (i.e., not forced by dataflow to occur after it) would cause the slowdown. I suspect some magic in the compiler allows it to identify that the path has not changed as it was passed through several class methods and back through a shift register, and this magic was disturbed by the parallel call. This being a subtle effect, which future modifiers may not be aware of, I’ve rewritten the package to use In-Place Elements to access the library, thus discouraging parallel use. I’m considering having multiple “Bind Text” versions: “Bind Text (UTF8)”, “Bind Text (UTF16)” (might as well add UTF-16, since SQLite supports it), and “Bind Text (system)”, or something like that. And corresponding “Get Column” versions.
  19. What’s the purpose of your “GUI manager”? Personally, I tend to keep all control manipulation on the block diagram of the VI containing the control. If I want a UI separated from the rest of the program, I use messaging between VIs; the messages carry information, not control references. If the “GUI manager” is supposed to provide loose coupling between program and UI (i.e., if you want to be able to substitute a different UI), then you need to channel all UI actions through it. But you’d also need your UI in a different VI from the main code (so you could swap it out), which would prevent you from writing to control terminals or local variables. So I don’t really understand the purpose of the “GUI Manager”.
  20. I just noticed that, although the OpenG version of “Wait (ms)” has an optional input for an Occurrence used to abort the wait, the complementary “Wait Until Next ms Multiple” does not. I suggest modifying this VI to also accept an optional Abort Occurrence. Here is a modified version I just made: Modified Wait Until Next ms Multiple__ogtk.vi — James
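     Roughly the behavior being requested, in Python (threading.Event plays the role of the abort occurrence; names are mine):

     ```python
     import threading, time

     def wait_until_next_ms_multiple(ms: int, abort: threading.Event) -> bool:
         """Sleep until the next multiple of `ms` milliseconds, or until aborted.
         Returns True if the wait was aborted early."""
         now_ms = time.monotonic() * 1000
         remainder = ms - (now_ms % ms)          # time to the next multiple
         return abort.wait(timeout=remainder / 1000)

     abort = threading.Event()                   # abort.set() ends the wait immediately
     aborted = wait_until_next_ms_multiple(100, abort)
     ```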
  21. Yes, but can I be certain that the string the User provides is actually meant to be interpreted in LabVIEW's standard encoding? Strings can be anything; LabVIEW only really applies an encoding for display purposes. The User could already be working with UTF-8 or any other encoding, and applying the so-called “String-to-UTF8” function would scramble that.
  22. Hi Shaun, I agree (which is why I hadn’t spent much time on benchmarking till recently). Getting a working implementation into OpenG is more important, as optimization can happen later. And SQLite is very valuable even when less than 100% optimized; I’ve written a couple of applications with it so far and speed has not been an issue. I’m not sure I want to make that conversion an implicit part of the API. Users may want the full UTF-8 (which I don’t think is recoverable once it goes through “UTF8 to String”). And if they are using regular LabVIEW text (ANSI, I think), then the plain-ASCII part of it is already valid UTF-8. I think it is better to document that the SQLite character encoding is UTF-8 (of which plain ASCII is a subset), and let the User deal with any issues explicitly. Perhaps I should include those conversion primitives in the palettes. — James
  23. I’m thinking about interoperability with other programs (admittedly, not the most common use case) that don’t use flattened NaNs and the like. I’m surprised that would be faster than the MoveBlock method (but I’ve not benchmarked it). Good catch; I don’t know why I didn’t use Byte Array to String there. I think you’re right; I’ll get the error message added. Thanks, — James
  24. What I mean is, if you take your null variants from variant mode and try to cast them to a number, the “Variant to Data” node will throw an error. Your other two modes specify the type when getting the column, as mine does, allowing SQLite to do the NULL conversion. Yeah, but why does SQLite, which is very economical in its number of types, bother making separate types for TEXT and BLOB? It must make a significant difference somewhere. Remember, I want to remain compatible with non-LabVIEW programs, which may have their own constraints on valid TEXT data. Binary data is NOT valid UTF-8 or UTF-16 data. I do have an eye towards eventually implementing BLOB I/O. Other differences between TEXT and BLOB are the collation functions and sort order. Could you explain this? I don’t see how \0s have any effect. I extract TEXT or BLOBs as strings with this code, which is unaffected by \0s: A good point about the Statement, but a User could be running multiple statements from the same connection. I only need to lock the connection from function execution to the query of the extended error code in order to be sure I get the correct code. To get into OpenG I have to be 2009-compatible, which means no inlining. And I think OpenG frowns on advanced optimizations (or even turning off debugging), so I may be stuck here. — James
  25. If I’m imagining it right, your package goes the route of getting SQLite to serve as a LabVIEW data-type repository. I would guess you could abstract away the details of the SQLite loose typing system from the User entirely, making it simpler to learn. I went a different route of defining an explicit boundary between the two type systems, with the User dealing directly with the SQLite types. So my BIND VIs refer to SQLite3 types (Text, Blob, Real, Integer) while my GET COLUMN VIs refer to LabVIEW types (string, DBL, I64). On its side of the boundary, I let SQLite store things as it wants, including empty strings, NaNs, and NULLs. This, I think, is an advantage if I were to need to interact with an SQLite database created by another program; can your package handle reading data from columns where some rows are NULL? How do you handle the fact that LabVIEW strings can represent either ANSI text or binary data? The former maps to SQLite_TEXT, while the latter maps to SQLite_BLOB. Do you store all strings as TEXT? Mine is a lower-level approach, I think, with the tradeoff of greater flexibility but also greater required knowledge to use. Fortunately, SQLite has excellent documentation. I need to recheck my benchmark, as 25 ns seems unbelievably fast even to me. I’m trying to get this into OpenG, though, and don’t want rare race conditions. I could get around this issue using DVRs or some other lock, but that’s some effort, so I’m putting it off until I fully understand all the issues. I realized a problem in using my Example1 as a benchmark. Fixing that, I’m still slower than Shaun by about 40%. I need to see if I can improve that. — James In the context help for the “Specify path..” checkbox, there is a “Tip” that seems to indicate the ability to unload a path-referenced dll:
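     The TEXT/BLOB/NULL boundary under discussion, shown with Python’s sqlite3 module rather than either LabVIEW binding (table and column names are made up):

     ```python
     import sqlite3

     db = sqlite3.connect(":memory:")
     db.execute("CREATE TABLE t (x)")                        # loosely typed column
     db.execute("INSERT INTO t VALUES (?)", ("abc",))        # str   -> SQLITE_TEXT
     db.execute("INSERT INTO t VALUES (?)", (b"\x00\x01",))  # bytes -> SQLITE_BLOB
     db.execute("INSERT INTO t VALUES (?)", (None,))         # None  -> SQLITE_NULL

     for (x,) in db.execute("SELECT x FROM t"):
         print(type(x), x)   # str, bytes, NoneType: the reader must handle NULL rows
     ```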