Everything posted by ShaunR

  1. I wouldn't worry too much about performance to begin with. Getting everything mapped out and functioning is (IMHO) more important, since optimisation does not prevent its use and can take a while due to it being an iterative process (this can be achieved with each stable release). If you are looking at making it directly compatible with other apps for viewing, you will need to insert using the "String To UTF8" and recover using the "UTF8 To String" VIs, as the methods Matt and I use do not honour this. UTF8 Conversion Because to use the MoveBlock you have to use 3 API calls rather than one (get the pointer, get the size, and then move it). That's not the reason
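The UTF-8 round trip described above can be illustrated outside LabVIEW. A minimal Python sketch (the table and column names are made up for illustration) of why text should cross the String/UTF-8 boundary explicitly when other applications will open the database:

```python
import sqlite3

# Store text explicitly as UTF-8 so external SQLite viewers can read
# it; bytes written without conversion may not display correctly in
# other applications.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")

text = "Grüße"                      # non-ASCII sample text
conn.execute("INSERT INTO notes VALUES (?)", (text,))

# Reading back: sqlite3 decodes the stored UTF-8 bytes to str.
row = conn.execute("SELECT body FROM notes").fetchone()
assert row[0] == "Grüße"

# The equivalent of "String To UTF8" / "UTF8 To String":
raw = text.encode("utf-8")          # String -> UTF8 bytes
assert raw.decode("utf-8") == text  # UTF8 bytes -> String
```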
  2. Interesting. So your SIF is "untyping" and "re-typing" using strings also. Not sure what the "Culture" is for, since file formats are locale-agnostic. Is this to cater for decimal points and time? I'm also not sure of the need for a "Strategy" interface, unless it is just from a purist point of view. After all, if you wire an object to the Serialize class, you want it saved right away before you read it again, right? Perhaps you can expand on the use case for this? I think the only real difference between what "you would like to see" and what I was envisioning is that the SIF Converter would actually be one of the Formats (JSON, probably, if it were up to me), meaning that the "Formatter" converts from JSON to the others (they override the default). However, that is an implementation-specific aspect so as not to re-invent the wheel, and there is no reason why it cannot be a proprietary syntax. I suppose one other difference is that I would probably not have the "Human Readable" interface, and each file format (binary, JSON, XML et al.) would have a discrete "Formatter" for its implementation. In this way, different file formats have a unified interface (JSON in my example) and the formatter/file saving is a self-contained plug-in that you just pop in the directory/lib
  3. Any USB device is going to be limited in its current capability (both sourcing and sinking) and is usually only 5 V - you didn't say which relays (5 V/12 V/24 V). You are much better going for a PCI solution such as the NI PCI-6517, which will operate 12 V and 24 V directly without intermediary hardware (32 x 125 mA max, or 425 mA per single activated relay). You'll also have more than enough current headroom to add LEDs that can burn retinas at 100 paces. If it is a 5 V relay, you can still use the same card, but you may have to put a resistor in-line to drop the lower (off) threshold depending on the relay. Most of the time you can get away without this, however.
  4. You might want to take a look at Dispatcher in the CR which may give you most of what you want.
  5. Well, to be fair, Rolf was on the trail with a lot less info than I had (gotta be worth a "like" or two). His next post would have been the same once he saw your files.
  6. OK. Reinstalling the GCC compiler with the latest version fixed the spurious references (0 bad, 6 stubs). I'm set to go for the next step. Woot.
  7. Multiply your array values by 255/Z Amplitude for displaying [e.g. Value * Round(255/1.76859)].
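As a sketch of that scaling in plain Python (the original context is LabVIEW; note this version rounds the final product rather than the 255/1.76859 factor, which avoids a small quantisation error):

```python
# Scale raw Z values into the 0-255 range used by an 8-bit intensity
# display: value * (255 / z_amplitude), rounded.
# z_amplitude = 1.76859 is the example figure from the post.
z_amplitude = 1.76859
scale = 255 / z_amplitude

values = [0.0, 0.5, 1.0, 1.76859]
display = [round(v * scale) for v in values]
print(display)  # [0, 72, 144, 255] -- the maximum value maps to 255
```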
  8. No. The SQLite amalgamation code is untouched. All differences are via the built-in defines or features. Yes, you can just drop in the sqlite3.dll (renaming it, of course), but there are features enabled that are not enabled by default in the sqlite.org DLL (foreign keys, for example). The DLL available from sqlite.org also doesn't support encryption. Up until now I've just recompiled the binaries with the latest version when I released a new API version. That was fine whilst the API was fluid, since my updates were faster than the SQLite ones. Now, however, the API is updated far less often than the binaries, and since they are distributed as part of the API package and installer, I don't really want to update the API version just because there are new binaries. It's been coming for a while. I just need to get off my arse and do it. They probably use a cross-compiler from Linux (really must get me one of those). So what you are saying makes sense. I'll do a grep on the MinGW source/includes and see what turns up now I know where to look.
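The foreign-keys point above is easy to demonstrate: in a stock sqlite.org build, foreign-key enforcement is off by default and must be switched on per connection, whereas a build compiled with the appropriate define (e.g. SQLITE_DEFAULT_FOREIGN_KEYS=1) has it on from the start. A quick check using Python's bundled SQLite:

```python
import sqlite3

# Default sqlite.org-style build: foreign keys are NOT enforced
# until you ask for them on each connection.
conn = sqlite3.connect(":memory:")

off = conn.execute("PRAGMA foreign_keys").fetchone()[0]
assert off == 0                      # default build: disabled

conn.execute("PRAGMA foreign_keys = ON")
on = conn.execute("PRAGMA foreign_keys").fetchone()[0]
assert on == 1                       # now enforced for this connection
```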
  9. Bloody hell. They've released 3.7.14 already? I'm definitely going to have to separate out the binaries from the LV source (which I've been thinking about for a while now). I'm looking at the DLL shipped with the SQLite API for LabVIEW (for obvious reasons). I too cannot find any references to them in the source and the DLL Dependency Walker says they are not bound. Still, the LV 2011 SP1 DLL Checker says they are there. If I run the DLL Checker on sqlite.org's DLL they are not listed (as you have shown), but if I build the DLL from the source amalgamation (I use GCC under MinGW) they appear. It isn't a straight comparison, however, since the API DLL has encryption and different compiler settings. It'll take me a while to incrementally build everything back up from the original amalgamation to where it is now to identify the "bit" that drags it in. It may even be the compiler dragging it in. Maybe. One step at a time. I'm not too worried about file system support, since there are a plethora of methods built in to the SQLite API, especially now that they support Windows 8 RT and WinCE (note that the RT stands for run-time, not real-time as I first thought...lol). They have implemented #define switches for the differing WIN APIs. There is bound to be something, but hell, "It's only software".
  10. I agree. SQLite is made for these systems. With my recent success in compiling for the Mac (and because a couple of people asked me about it recently), I decided to look into it. VxWorks is a long way off due to a lack of hardware for development and testing. However, I have an old copy of the Pharlap ETS, so that is "supportable" in theory. The first step, though, is to get the unsupported kernel calls addressed, and I have had some success with that. I'm now down to 3 (from 13). Can't for the life of me find where AddAtomA, FindAtomA and GetAtomNameA are referenced, so it must be an indirect dependency. I know it's a while ago this was posted, and you succeeded in compiling for Pharlap. Have you looked at this more recently?
  11. I'm not really understanding, then, since the charts/graphs display as many plots as you wire to them.
  12. The property node Legend>>Number of Rows?
  13. Attach your code (source and compiled). Our crystal balls are not working at the moment.
  14. If you also route the "requestor" requests through the "Wait on Responses" (no need for your dotted line then), then you end up with the "Dispatcher" that I've been describing.
  15. Indeed. In fact, the only safe way is to have a specific Destroy (I'll be eagerly looking at the solutions here, since I have not found one other than the wrapper). Suppose you use a ref counter and clean up when the count reaches zero (within the modules): what happens if the modules get called sequentially? The first module will create the queue, release it, and therefore destroy the data, since the counter is now 0 (no other modules have obtained a ref). When the next module in the sequence needs it, it will be gone. This is the race condition that occurs when they are all running asynchronously.
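The sequential-call failure described above can be sketched in Python (the class and names are illustrative only; in LabVIEW the resource would be a named queue holding the data):

```python
# Sketch of the race: a shared resource ref-counted inside each module.
# When modules run one after another instead of overlapping, the count
# falls to zero between them and the data is destroyed.
class SharedQueue:
    def __init__(self):
        self.refs = 0
        self.data = None

    def acquire(self):
        if self.refs == 0:
            self.data = []          # (re)create on first reference
        self.refs += 1
        return self.data

    def release(self):
        self.refs -= 1
        if self.refs == 0:
            self.data = None        # destroyed: count hit zero

q = SharedQueue()

# Module A runs, finishes, releases -> data destroyed.
a = q.acquire()
a.append("result from A")
q.release()

# Module B runs next and finds a fresh, empty queue: A's data is gone.
b = q.acquire()
print(b)   # [] -- the earlier contents were lost
q.release()
```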
  16. I believe this may break the license agreement. Added my own emphasis
  17. Then you don't need a count, because you know there is always 1 queue reference. The data (the .NET reference) won't be destroyed until you explicitly call the "Destroy" method in the wrapper. The only thing each call destroys is the Queue reference, and then only if the queue already exists (so during execution of the wrapper the queue ref count is 2). This means that the Queue wrapper can be called from anywhere in the app, by any modules, in any order, and will always maintain the queue with a single reference (which keeps the data alive). Your .NET reference will be fine, but you won't have to count anything or worry about releasing anything, as the Queue reference is created and destroyed, on demand, without destroying the data.
  18. This is why the UI Controller is separate in my architecture. The UI itself just sends messages (it doesn't generally receive them). The state machines (or whatever) are contained in the UI Controller. It gets a bit icky at that point, since manipulation of the UI is contained in the controller due to LabVIEW's insistence on some controls using references (like tree controls), so the partitioning isn't as I would like. The compromise I took is that the message also contains the name of the control; it is sent to the Controller, which updates the UI with property nodes. What this gives me is the ability to update the display via TCPIP. I think the issue is more about encapsulation of a non-encapsulatable process. Whilst on the surface it looks like "most" scenarios can be catered for, in reality there are too many "edge" cases that require detailed knowledge of, and compliance from, other parts of the system to operate effectively. Using a messaging "architecture" rather than "future" messages means that all the scenarios can be catered for. Well, that's my very cursory impression at least. It just looks like too much effort for little reward (mainly due to the edge cases), and the "Shoot Yourself In The Foot" factor is quite high. Indeed. I think we are, in fact, using very similar architectures. In fact, my diagram is a little incomplete. Quite often (but not always) there is a "Sequence Engine" module. As I discussed previously about "logic in the dispatcher", this removes that logic into a separate unit. In this case, the Dispatcher is merely a "router", passing messages back and forth asynchronously between the various modules. It is the equivalent of your "Subsystem Controller" but operates on messaging rather than devices. That is probably me trying too hard to fit with your example. In fact, the messages look nothing like yours. They would be of the form "TARGET->SENDER->CONTROL->OPERATION->PAYLOAD". They bear little resemblance to the actual UI operation.
They are sent from the UI to the UI Controller (via the Dispatcher) and it decides what they mean. Indeed. Very, very similar, except that I also break the link with the UI so that TCPIP (for example) can manipulate things just as the UI would, by sending the same messages. Hmm. For most UI stuff I don't use state machines; I rely on LabVIEW's in-built state management. If I were to have a similar feature: if the button were pressed then the acquire module would be launched (dynamic loading), and when it was depressed it would just crowbar that module. Any events that came in between those "two" states would be displayed, obviously. I suppose that is what I'm alluding to: that to implement futures in LabVIEW you have to use an AppController/Dispatcher anyway. I don't really see any other way of resolving the "sometimes synchronous", "sometimes asynchronous" without it. Looking forward to the result once you have chewed it over, because I'm sure if there is another way, you will find it. Well. For example. The "Device" is developed as a completely separate project and runs as a process. It is a totally self-contained module. To launch it you just lay the controller on the diagram (or, more often, dynamically) and manipulate it using the messaging API. Dependency on static code? Well, difficult to say, since it relies on my utilities which are used everywhere, but it is not dependent on any application-specific code. The dispatcher is the same (if being used as a router, as I mentioned earlier). I may modify it with specific code and use it like a "framework", but in the former case it doesn't need to know what the messages are, only where they need to go. As I have standardised most of my API messages across all devices/interfaces, it can be launched and used as-is. Although there is a huge temptation to "quickly" add application-specific filtering to it, I persevere with not doing that, so it remains generic. The image only shows messaging.
It's not really hierarchical; it's more of a "plan view" with the controllers and interfaces sitting around a central "hub" (the dispatcher). If the UI were drawn as you have it, it would imply that the UI cannot send messages directly to the other controllers. It can. The UI Controller is more like the "Sub System" block in your diagram, which would have the graceful shutdown code for the "Exit" message, for example. However, for getting a status value from the Device, the UI could send that directly without going through the controller. In fact, it is possible for all interfaces and controllers to communicate directly with each other (Device 2 could send a message to Device 1). But this feature has to be used sparingly. I consider it OK, for example, for the UI Controller to send messages to all devices to exit, but not for Device 2 to send a "Move" command to Device 1.
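A minimal parser for string messages of the "TARGET->SENDER->CONTROL->OPERATION->PAYLOAD" form described above, sketched in Python (the field names come from the post; the delimiter handling and the example message are assumptions):

```python
# Split a routed string message into its five named fields.
# maxsplit=4 keeps any "->" inside the payload intact.
def parse_message(msg):
    target, sender, control, operation, payload = msg.split("->", 4)
    return {"target": target, "sender": sender, "control": control,
            "operation": operation, "payload": payload}

msg = "UI_CTRL->TCPIP->Tree1->UPDATE->row=3,value=42"
parsed = parse_message(msg)
print(parsed["target"], parsed["operation"])  # UI_CTRL UPDATE
```

Because the message is a flat string, a TCPIP interface can inject exactly the same traffic as the UI, which is the point made above about breaking the link with the UI.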
  19. No. 1 won't work since, if the queue is not created, you don't know if it's the last one or not (there may be 10). There is a No. 4: only allow 1 queue reference so that you do not need to count. This is the method I use, which means that you only have to call "Destroy" at the end of everything (it also means you don't get "runaway references").
  20. "I have Strange Disk Error" - try turning off Windows Defender.
  21. Here ya go. Not a fan of .NET. Nasty, bloaty rubbish that LabVIEW can't keep up with for versioning.
  22. No. 1 to start with. A dispatcher enables you to have both the synchronous and the asynchronous. No. 2? Not really. I just think it is incomplete and is limited by its message "depth". The problem with futures is that they can "sometimes" be all of the diagrams you depicted earlier. There has to be logic, state and sequencing somewhere at some point, otherwise one of the three could be chosen for all. The key (I think) is to isolate the synchronous/state-full from the asynchronous and, to do that, you need a "broker" or "dispatcher" to consume un-propagated asynchronous messages and to sequence sequential ones. The argument then becomes how much logic you put in your dispatcher (or should you rename it "controller"?). For small apps you can put it all in there (i.e. a sequence engine). I would argue, though, that there should be none, and that the logic should be in a controller so that it is modularised with the device (whether that be a piece of hardware or a UI). This is, however, getting more complex. Saying that, though: Dispatcher in the CR uses the former (logic in the dispatcher), so I really should listen to my own arguments. I'm not sure of your terminology here. When I talk about controllers, I think in terms of "device" controllers: a piece of code that sequences and abstracts complex operations into simple API commands (messages). If you are thinking in terms of the Muddled Verbose Confuser, then that is different, since it is a logical separation rather than a functional one, and is aimed mainly at UIs. I, however, view a UI like a "device", so the UI will have a "controller" but no "model". The closest to a "model" would be the messaging construct itself. However, I use string messages, so the "logic" involved in the dispatcher is trivial. Indeed. It all depends on whether you will use it in other apps or whether it is a throw-away architecture. I assume you are looking for a generic solution.
The following is the messaging architecture I have used in all apps for the last 3-4 years. All of the controllers, the drivers (the "Device 1/2"), the dispatcher, and the TCPIP module can be brought directly into other apps. The UI, however, cannot (every UI is different), but there is only one module to create. This might be overkill for your app, but I hope it demonstrates the role that the dispatcher plays in separating the asynchronous from the synchronous.
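The dispatcher-as-router role described above can be sketched in a few lines of Python (module names, message strings and the mailbox mechanism are all illustrative; the real implementation is LabVIEW queues):

```python
# A dispatcher that knows nothing about message content, only where
# each target's mailbox lives. It routes on the TARGET field alone.
import queue

class Dispatcher:
    def __init__(self):
        self.mailboxes = {}

    def register(self, name):
        """Create a mailbox for a module and hand it back."""
        self.mailboxes[name] = queue.Queue()
        return self.mailboxes[name]

    def route(self, msg):
        # Messages are "TARGET->SENDER->PAYLOAD"; inspect TARGET only.
        target = msg.split("->", 1)[0]
        self.mailboxes[target].put(msg)

d = Dispatcher()
ui = d.register("UI")
dev1 = d.register("DEVICE1")

d.route("DEVICE1->UI->MOVE,10.5")     # UI asks Device 1 to move
d.route("UI->DEVICE1->STATUS,OK")     # Device 1 reports back to the UI

print(dev1.get())  # DEVICE1->UI->MOVE,10.5
print(ui.get())    # UI->DEVICE1->STATUS,OK
```

Because the dispatcher never interprets payloads, any module (a TCPIP interface, another device) can be registered and can message any other, which matches the "hub" picture above.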
  23. Indeed. You would find it much easier with a "dispatcher" so you could have asynchronous to the UI and synchronous to the device.
  24. Well. If it doesn't show up in those two then, by elimination, you know it must be the third one. Which toolkit is it? The NI one isn't password protected.