Everything posted by smithd
-
Well, zeromq already has pretty solid LabVIEW bindings: http://labview-zmq.sourceforge.net/ I've used those a little bit on Windows and also tried them out on the 9068 back when it was released, and the guy who wrote them uses them on Linux too. I'm not familiar with either of the others, though.
-
I don't know if there is a good way to have a zero-allocation structure, because it would have to be something used universally (or else you'll have some code using a standard error and some with an RT error) but whose RT-ness could be turned off in favor of a more verbose output. One option to aid in this would be wrapper functions with conditional code, so that when the code runs on RT and the flag "RTVerbose" is not set to true, all of the dynamic allocations are removed. Except really you don't want this at all. What you really want is the less verbose version when you're on RT, and RTVerbose != true, and the code is inside of a timed loop or an above-normal-priority VI, and currently I don't think there is a language construct to do this (although I've asked for it).

Back to the general point about how it would look: I personally tend to think it should be an int + a dictionary (i.e. a variant lookup, in our case). I suppose a class could do it too (the base class is int-only, then a verbose class adds source, call chain, etc., then user classes add custom data), but then there is a ton of code needed to generically access that information. All of that has basically already been written for variants with the various probes, XML flatteners, etc.

Some of the other threads mention multiple errors. I think the simple dictionary would also do a better job of handling multiple errors than something like an error stack. Just thinking out loud here, but it seems to me that for any given chunk of code there should only be one error -- every other problem can be traced back to that. I think multiple errors come in handy in two situations:

1. Combining the errors from parallel chunks.
2. Init, where you want to know everything that went wrong so you can fix it.

(2) would really make more sense as a custom field, which is what I do now: "Tag not found, Append: Tag1, Tag2, Tag3" would be converted into MissingTags=["Tag1", "Tag2", ...]. (1) would make more sense as a named error field -- i.e. rather than Error[0], Error[1], Error[2], you want to see FileLoggerError=7, FTPError="Thank you for using NI FTP", etc.

The only thing I'm really certain of is that I wish the boolean were gone forever.
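A rough sketch of the "int code + dictionary" idea in Python, since LabVIEW diagrams can't be shown in text. All names here (make_error, merge_parallel, the branch names) are invented for illustration, not part of any real API:

```python
def make_error(code, **fields):
    # an error is just an integer code plus a dictionary of named fields
    return {"code": code, **fields}

NO_ERROR = make_error(0)

# (2) init-style "everything that went wrong" as a custom field:
tag_error = make_error(-2200, MissingTags=["Tag1", "Tag2", "Tag3"])

# (1) parallel branches combined under named keys, instead of an
# anonymous Error[0], Error[1] stack:
def merge_parallel(**branch_errors):
    failed = {name: err for name, err in branch_errors.items()
              if err["code"] != 0}
    if not failed:
        return NO_ERROR
    # surface the first failing code, keep the rest by name
    return make_error(next(iter(failed.values()))["code"], branches=failed)

combined = merge_parallel(
    FileLogger=make_error(7),
    FTP=make_error(-1, message="Thank you for using NI FTP"),
    DAQ=NO_ERROR,
)
print(sorted(combined["branches"]))  # -> ['FTP', 'FileLogger']
```

The point of the sketch is that a generic dictionary gives you both the custom field (MissingTags) and the named parallel errors without needing a class hierarchy to access them.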
-
Are you using timed loops with the time source set to absolute time? Since the loop is tied to absolute time I've seen this cause issues when the system time is modified. Do regular while loops freeze too? How are you determining that things have frozen? Are you running in interactive mode? If so, does it disconnect when you trigger this change? Are there any other sections of your code where you are changing behavior based on a timestamp?
-
I don't disagree in general, but the express VI settings help provide for some of this. For example, a given instance of the express VI might be configured to clear error 7 while another instance is not. It still doesn't provide source, though, which I agree is unfortunate. The devzone paper also describes using it super-locally to handle things like retries. And of course in some cases there is nothing to do locally. It's the middle range of issues -- where the handling is more complicated than just retrying but not bad enough to just shut down -- where there are challenges using the SEH.

We ended up doing something I think is similar to what you described, but of limited usefulness since it's within our framework. Code modules synchronously return error codes to the caller and provide a method for categorizing them (no error, trivial, critical, unknown), and then the calling code has a set of actions it can take; the mapping from (module, classification) -> (action) is all configuration-based. The actions are things like shutdown, go to safe state, log, or reinitialize the module. The caller is also responsible for distributing error codes to any other module that cares, so for example control code can be informed if the scan engine had an error.
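A minimal sketch of that configuration-driven routing, in Python. The module name, classification rule, and action strings are all invented placeholders; in the real framework the ACTIONS table would come from a config file:

```python
# each module provides its own classifier for its error codes (hypothetical rule)
CLASSIFY = {
    "scan_engine": lambda code: ("critical" if code < 0 else
                                 "trivial" if code > 0 else "no_error"),
}

# (module, classification) -> action; normally loaded from configuration
ACTIONS = {
    ("scan_engine", "critical"): "go_to_safe_state",
    ("scan_engine", "trivial"):  "log",
}

def handle(module, code):
    classification = CLASSIFY[module](code)
    if classification == "no_error":
        return "continue"
    # unknown (module, classification) pairs fall back to shutdown
    return ACTIONS.get((module, classification), "shutdown")

print(handle("scan_engine", -65537))  # -> go_to_safe_state
print(handle("scan_engine", 0))       # -> continue
```

The separation matters: the module only knows how to classify its own codes, while the mapping to actions lives entirely in configuration, so behavior can change without touching module code.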
-
Oh well. Edit: I was just looking back through it and I remembered all the issues I had getting it to work at all. Things like GUIDs and paths just didn't get set as I would expect. You can always dig through some of the other code in the same directory, but it's pretty hard to understand what the different functions are doing.
-
I put together something which imports a web service into your project from a template. It's really hacky and may not work for all web services (in fact it no longer works for the web service I originally wrote it for, although I plan to fix that at some point). The code is on this download page: https://decibel.ni.com/content/docs/DOC-38927 Specifically ni_cvt_web_addon-1.6... I'd just unzip it and open the VI ni_cvt_web_addon-1.6.0.1\File Group 0\project\AddCVTWebServiceToProject.vi Also, be sure not to judge me based on that code.
-
Old content remains in lvclass files
smithd replied to Steen Schmidt's topic in Object-Oriented Programming
(A) I don't know why it's an unusual size. (B) If you delete the entire genealogy section in the XML, the size goes back to normal. (C) I've never had this cause an issue, but I would imagine it's not officially supported. The genealogy is probably unrelated to those issues you posted, but it will cause issues if you try to unflatten old data. I don't have a need for this feature, so I delete the data pretty regularly. It makes RT deploys a bit less painful too, or at least it does in my imagination.
-
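Since a .lvclass file is XML, stripping a section like this can be scripted. A rough sketch below; the tag names and layout in SAMPLE are placeholders, not the real .lvclass schema, so check the actual file (and keep a backup) before running anything like this on real class files:

```python
import xml.etree.ElementTree as ET

# toy stand-in for a .lvclass file; real tag names may differ
SAMPLE = """<LVClass>
  <Property Name="NI.Lib.Version">1.0.0.3</Property>
  <Genealogy>
    <Version>1.0.0.1</Version>
    <Version>1.0.0.2</Version>
  </Genealogy>
</LVClass>"""

root = ET.fromstring(SAMPLE)
# remove every top-level element with the (assumed) genealogy tag
for section in root.findall("Genealogy"):
    root.remove(section)

cleaned = ET.tostring(root, encoding="unicode")
print("Genealogy" in cleaned)  # -> False
```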
why so little love for statecharts
smithd replied to MarkCG's topic in Application Design & Architecture
The way I interpret some of that stuff is that state isn't evil; it's the combination of state and action. For example, consider the difference in LabVIEW between a subVI which includes an uninitialized shift register vs. one which simply takes data in (from wherever), processes it, and returns a result (to wherever). The second is a whole lot easier to understand, to prove correct, and to test. There is still state; it's just been moved up a level or two or.... Also, I personally found this paper interesting and very helpful in understanding those functional programming crazies.
-
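The stateful-vs-stateless contrast above, sketched in Python (a class with hidden state playing the role of the uninitialized shift register, vs. a pure function with the state on its connector pane; both names are made up):

```python
class RunningMean:
    """State hidden inside -- analogous to an uninitialized shift register."""
    def __init__(self):
        self.total, self.count = 0.0, 0
    def __call__(self, x):
        self.total += x
        self.count += 1
        return self.total / self.count

def running_mean(state, x):
    """State passed in and returned explicitly -- trivially testable."""
    total, count = state
    total, count = total + x, count + 1
    return (total, count), total / count

# the pure version can be tested from any starting state, no setup calls:
state = (10.0, 2)   # pretend two samples averaging 5 came before
state, mean = running_mean(state, 4.0)
print(mean)  # -> 4.666...
```

The state still exists either way; the pure version just moves it up a level so the function itself stays easy to reason about.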
I remember someone having issues with DAQ threads at some point and fixing it in an exe. I believe all you have to do is copy the appropriate INI keys from labview.ini into your exe's INI file, and the runtime will allocate the right number of threads on launch.
-
Dynamic dispatch & Shared reentrancy
smithd replied to Nicolas Bats's topic in Object-Oriented Programming
Because I am not all of NI... I'm not even in R&D. (But also what Shaun said)
-
Dynamic dispatch & Shared reentrancy
smithd replied to Nicolas Bats's topic in Object-Oriented Programming
For FPGA almost everything is reentrant, but really I'm referring to things like the delay, rising edge, etc. functions which store previous state in a feedback node. This would cause a problem if you made any such function non-reentrant, but that isn't the default. But... I actually do mean going to the extreme of never storing state in a VI except in these special cases -- with DVRs I don't think an uninitialized shift register is really appropriate anymore. For the analysis functions I don't think there was ever a need to store state inside -- they're math functions! Or they could go even further and make any branch perform a ref count, then deallocate the refs as soon as the last calling VI with that branch of the wire goes idle... you know, like when it frees all that memory in your 5M-element array.
-
Dynamic dispatch & Shared reentrancy
smithd replied to Nicolas Bats's topic in Object-Oriented Programming
It could totally free that memory if it really wanted to. It doesn't have a problem unless you store state inside of your VIs, which I personally think is a bad idea with a few exceptions (FPGA, an action engine operating on a singleton resource, etc.). Pure math functions are certainly not on the list. Also, staab posted this suggestion a while ago: http://forums.ni.com/t5/ideas/v2/ideapage/blog-id/labviewideas/article-id/19226 It didn't seem to be popular for whatever reason; I think he probably just didn't explain how big the issue was. Either way, my hope is that enough people know this limitation by now that they'll stop making functions with internal state, but we'll see.
-
Yeah, I'd expect something like this (http://www.ethercat.org/en/products/92F6D9A027D54BABBBEAFA8F34EA1174.htm) to be the right answer, but I can't find one for PXI(e). Modbus is not a particularly fast protocol, and it's also not that easy to use (there is only one data type). Do you have performance constraints for how fast you need to move data around, or what the latency should be like?
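To illustrate the "only one data type" complaint: Modbus exposes everything as 16-bit registers, so anything wider has to be packed and unpacked by hand. A sketch of round-tripping a 32-bit float through two registers (word order varies by device; big-endian is assumed here):

```python
import struct

def float_to_registers(value):
    # split a float32 into two big-endian 16-bit register values
    hi, lo = struct.unpack(">HH", struct.pack(">f", value))
    return [hi, lo]

def registers_to_float(regs):
    # reassemble two 16-bit registers back into a float32
    return struct.unpack(">f", struct.pack(">HH", *regs))[0]

regs = float_to_registers(1.5)
print(regs)                      # -> [16320, 0]
print(registers_to_float(regs))  # -> 1.5
```

EtherCAT devices, by contrast, describe their process data types in the slave's object dictionary, which is part of why it's easier to use for this kind of I/O.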
-
No, the only slave is the 9144; PXI is usually used as a master. Out of curiosity, why would you want to use a PXI chassis as a slave? I normally see PXI as a high-speed/throughput acquisition system unless VeriStand is involved. Is there a PXI-only feature you need which isn't supported by the 9144? If you don't need deterministic single-point data communication, does the Beckhoff controller support other protocols?
-
So I mean, they were trying to get there with call and collect; I just don't think it's very user-friendly. Your concept makes a lot of sense, and it would be handy for all the different types of call-by-ref. Instead of a single node, we have "latch values", "run", and "get values", and I suppose we'd have to have a function to "get instance from clone pool". The other usability items I can think of:

- A timeout on the wait/collect node.
- An easy way to abort reentrant clone pools if we need to shut down (probably solvable if we had a "get instance from clone pool" function).
- Improved type propagation, so if you update your connpane it doesn't break everything in your code (this seems to happen more with objects... to fix this I've resorted to just feeding a variant through everywhere :/).
- Decorate VI server references with different settings, so you don't have to remember the correct call-by-ref setting (and so the compiler could check the type for you -- if you say "open this for reentrant run" and the VI isn't reentrant, it should break).
- Some of the functional programming/lambda discussions from one of the other forums would be handy (I was thinking earlier "well hey, most of this I could probably do with an xnode" and then I realized that I'd need to make a second VI, and this would be solved if I could script the VI I needed inside of the node...).

How do those sound to you?
-
Looks pretty straightforward. On the one hand I like the type safety the object gives you, but on the other hand, objects are a pain to use in large quantities -- I hate the whole documentation, inheritance, etc. process. I know there are some fixes out there, but really I'm OK with giving up type safety in exchange for just passing in a VI server reference... of course, nothing about your version prevents someone from doing that. I may just have to use yours in the future.
-
Since it's a simple change, I made a branch here: https://github.com/smithed/taskpool/tree/removecancel I think I like it better, but I'm still thinking about it. Edit: yeah, I think it makes sense to leave that to the end user. It should be easy to make a task which supports a custom cancellation mechanism if needed.
-
Meh, you're right. I hadn't thought about that issue... the DD calls will eventually all add up, and they'll probably be shared across all the call pools. Oh well. On the advanced vs. easy API topic, what I was considering was creating an FGV which has the same behavior as what you and AF have, so it would initialize a default call pool and provide a simple 'run task' function you can just grab and use. But having a backing API makes me happy.

I've been kind of on the fence about the cancellation thing since I made my UI example, as it's kind of hard to keep track of. It felt like it would be easier to ignore a result than to cancel *and* ignore the partial result. That having been said, my goal was not really to make shutdown faster, just to let the task know we don't care if it finishes... but then we get back to whether there is really a benefit. I tend to think that for my purposes I'd choose to avoid modifying the state of the system, so cancelling really just saves CPU time, which isn't an issue. Since any really long-running tasks (like wait on TCP or whatever) can't be effectively cancelled, it makes me think your suggestion of eliminating it for simplicity is the right one. I will think about it more.
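The "can't effectively cancel a running task" point is the same trade-off Python's concurrent.futures makes: cancel() only succeeds while a task is still queued; once it's running (e.g. blocked on TCP), it can only be ignored, not stopped. A small demonstration:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

started = threading.Event()

def long_task():
    started.set()
    time.sleep(0.2)   # stand-in for a blocking wait (TCP read, etc.)
    return "done"

pool = ThreadPoolExecutor(max_workers=1)   # one worker, so the 2nd call queues
running = pool.submit(long_task)
queued = pool.submit(long_task)
started.wait()                             # make sure the first task is running

can_cancel_running = running.cancel()      # False: already executing
can_cancel_queued = queued.cancel()        # True: never started
pool.shutdown()
print(can_cancel_running, can_cancel_queued)  # -> False True
```

Which supports the "eliminate it for simplicity" conclusion: if cancellation can only ever stop work that hasn't started, ignoring the result covers most of the same ground.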
-
I'm not sure where, but I'm guessing GDS has some scripts you can use as a starting point, and the master branch is in 2012. Edit: I didn't realize it's GPL. That may cause issues for you.
-
OK, I made the changes you suggested and I think I like it better this way. Also, I realized I forgot to address one point on your #1. Even if I did change it to just support the standard execution system, I'd still prefer having a specific 'context' or whatever that everything runs on, rather than using what is basically an inaccessible FGV. If you want a semi-real reason, it's this: the async call pool can only grow, which makes it kind of scary to me to use in a long-running application unless you have the ability to shut down the entire clone pool (which I don't think you can do in AF or yours). Having a separate reference to a specific clone pool means that as you launch or shut down parts of your application, you could launch and shut down the paired clone pool. Not a huge deal, but it just makes me feel more comfortable using it.
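The "context per part of the app" idea, sketched in thread-pool terms: each subsystem owns its own pool and tears it down on shutdown, instead of feeding one global, grow-only pool. The TaskContext name and API are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

class TaskContext:
    """A hypothetical per-subsystem task context wrapping its own pool."""
    def __init__(self, size):
        self._pool = ThreadPoolExecutor(max_workers=size)

    def run(self, fn, *args):
        return self._pool.submit(fn, *args)   # returns a future

    def shutdown(self):
        # the workers die with the subsystem, unlike a global grow-only pool
        self._pool.shutdown(wait=True)

ui_ctx = TaskContext(2)
result = ui_ctx.run(lambda x: x * 2, 21).result()
ui_ctx.shutdown()
print(result)  # -> 42
```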
-
1 - ^^ What he said; it's mostly just there if you know you're going to block an entire thread doing something. For example, with HTTP GET I believe it's calling a DLL, so you're blocking a thread during that process. Same thing with some of the other I/O types. It's not clearly documented, but those inputs can be completely ignored, and it will automatically create a pool of size 10 on the standard exec system and always run it there.

2 - I thought about that one a lot and went back and forth. On the one hand I liked the idea of batches of batches, and of course you can still do that with your own tasks. But I figured that it could be handy to focus things down somewhat, which is why I added the actions. That way people who are afraid of objects can use callbyref actions, people who are OK with objects can make new actions, and people who want to use all the features can make tasks. At the same time, given that the inputs are basically the same, I kind of see what you mean. It probably makes sense to merge them.

3 - I also went back and forth on this. First, you're absolutely right. But... it kind of simplifies things to always have a 'parameter' input that you can call on any type of action. I think the best solution would probably be to remove it from the parent action (which, combined with #2 above, would basically mean I delete action entirely) but leave it on the callbyref class, since that's supposed to be the easiest to use and there has to be a generic parameter input on that one anyway.
-
I thought about yours and AF before moving forward on this and decided it still made sense as more of a loop co-processor than as a dedicated logical actor. That is, it's more of an off-diagram "helper" loop, in your terminology. I may need to go back and look at the code, though, as I was under the impression that everything in there was an actor, and I wanted to avoid that because you still have the problem of clogging the QMH. If every instance is its own async call, then I think I must have just missed the right spot in the code or misunderstood. Looking at it again, now that I'm looking in the right place, it looks like yours does most of the same stuff mine does. I had been under the impression that your library was more focused on communicating between actors, but now I see it's way more general-purpose.

I can't access the Google+ or YouTube page. Could you upload the slides here or just describe the race condition problem? I think we're talking about similar sets of issues, but I don't see them as being all that horrible, so I'm curious why you're so against the idea. I tend to think it just ends up being a lot more work than it needs to be to make good code.
-
Yeah, that's correct. I had been learning a bit of C# and thought the ease with which you could run things in the TPL was impressive. Mine is... nowhere near as fancy, and probably never can be, but the intent was similar.
-
Hey all, I've spent a little time here and there working on this, and I figured now was the right time to ask for feedback.

Typically when making a new UI I'll use something like AMC and have a producer (the event structure) and consumer (a QMH). This is the standard template in AMC (image here) and it's also used in, for example, the sample projects. This is OK and has done well for a long time, but there are weak points. (a) The QMH can get clogged. After all, you're sending all of your work down there, and if something is slow, the consumer will run slow. (b) This pattern seems to always end up with a weird subset of state and functionality shared between the two loops. For example, maybe your UI is set up to disable some inputs in state X, except that it's your QMH, not your UI loop, which determines that you're in state X. So maybe you send a message to the QMH, it takes some time, and so your user is able to press buttons they shouldn't be able to. You fix this by putting the disable code in your UI loop, but then you need both loops to know that you're in state X. Another example: if you're using features like right-click menus, you need to share state between the UI and the QMH so you can generate the appropriate right-click menu. There are many examples like this. None of them is particularly heartbreaking, but my hope is that this is a better way.

At one point a few months ago I was in a conversation with R&D about events and we got onto some of these issues. Aristos and some others pointed out this was basically making two UI threads, and suggested pulling everything back into a single loop (just the event handler) but then using async call by ref to take care of all the work that takes more than 200 ms (or whatever you personally consider the cutoff to be). This solves both problems because (a) async call by ref has a pool of VI instances it can use, so the code never blocks, and (b) you only have one loop for the UI and associated state information, so there are fewer chances for weird situations. Since the code for doing all that manually is kind of tedious, I put together this prototype library to hopefully make the above design really, really easy.

Feedback I am looking for:
- Is this a worthwhile pursuit at all? (i.e. do you agree with the first couple paragraphs above?)
- Has this been done before? (I searched and searched, but I may have missed something.)
- Any thoughts on this first draft at implementation?

The code is here and examples are in the project or here. The main example is "example UI get websites", but this example also requires the lovely variant repository. Not for any particular reason, I just like it. There are more details about the code in the readme.
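A text-mode sketch of the single-loop design described above: one event handler, with anything slow handed off to a pool and the results collected back in the same loop, so no second loop ever holds UI state. The event names and fetch_website function are invented stand-ins:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_website(url):
    # the slow work that would otherwise clog a QMH consumer
    time.sleep(0.05)
    return f"contents of {url}"

pool = ThreadPoolExecutor(max_workers=4)   # the "pool of VI instances"
pending, results = [], []

# stand-in for the event structure's queue of UI events
events = [("fetch", "lavag.org"), ("fetch", "ni.com"), ("quit", None)]

for event, payload in events:              # the one and only UI loop
    if event == "fetch":
        # hand off instead of doing the work here; the loop stays responsive
        pending.append(pool.submit(fetch_website, payload))
    elif event == "quit":
        results = [f.result() for f in pending]  # drain before exit
pool.shutdown()
print(results)
```

All state lives in the one loop; the workers only compute and return, which is the property that removes the shared-state-between-two-loops problem.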
-
I'm not sure if it fits your needs, but have you looked at this? http://www.ptpartners.co.uk/ptp-sequencer/ It looks pretty cool (I haven't used it, but there's a video series on that page). It seems to cover all the basics you're developing here, and it's got a 'run next step' function you could call from anywhere, including an actor. Also on the tools network: http://sine.ni.com/nips/cds/view/p/lang/en/nid/212277