Posts posted by smithd

  1. 22 hours ago, rolfk said:

    or took the super duper easy approach of only calling into the upper most, super easy dummy mode API that only exists to demo the capability of the DLL, not to use it for real!

    So...all of sysconfig

    22 hours ago, rolfk said:

    The problem of LabVIEW is that it allows you to easily call more than one of these functions in parallel, and it doesn't break down immediately, but only after you have exhausted the preallocated threads in a specific execution system. By using lower level asynchronous APIs instead you can completely prevent these issues and do the arbitration on the LabVIEW cooperative multithreading level, at the cost of somewhat more complex programming, but with proper library design that can be fully abstracted away into a LabVIEW VI library or class so that the end user only sees the API that you want them to use.

    But the finite number of execution threads doesn't really seem like it makes sense. I'm assuming this simplified some code a while back and probably saved a lot on performance, but something in the runtime system that instead says "crap, every thread has been blocked for N ms, better make a new one" doesn't seem that crazy. Or to put it another way...yes, sure, you could use the async API instead of the sync one, but isn't that what we pay LabVIEW for? ;)

     

  2. 9 hours ago, pawhan11 said:

    I think sometimes globals or DVRs will be more suitable than messages. For example, when we have a large buffer of data points that some process is acquiring and storing in memory, others will use that data. By using globals/DVRs it is basically just set and get. Using messages involves flattening data to a variant/string in order to pass that data through the message implementation. With large data this delay might be significant.

    It sounds like you still need some sync mechanism to be sure all the consumers get the data in the right order and all that (assuming your large buffer is actually a stream -- if it's more of a database/tag then sure, a DVR would probably be the better implementation). The amount of overhead related to allocating a N0,000 byte buffer on a machine with 4-64 GB of RAM is pretty minimal, and I wouldn't step into DVRs over messages until I know for sure it's too much data to keep up. To get the best of both worlds you could implement your own preallocated message buffer using a fixed-size queue and some DVRs (rough sketch below), but then we're really going off the deep end.

    Edit: also, the global has the same overhead since it generates a copy every time, and a DVR will have overhead unless you process the data inside the In Place Element structure.
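    Since LabVIEW diagrams can't be pasted here, here is a rough Python analogue of that preallocated-buffer idea (names and sizes are made up): buffers are allocated once up front and recycled through a fixed-size queue, so steady-state messaging does no allocation.

        import queue
        import numpy as np

        class BufferPool:
            """Preallocated buffers recycled through a fixed-size queue,
            loosely analogous to a fixed-size LabVIEW queue holding DVRs."""
            def __init__(self, count, size):
                self.free = queue.Queue(maxsize=count)
                for _ in range(count):
                    self.free.put(np.empty(size))  # allocate once, up front

            def acquire(self):
                return self.free.get()   # blocks if every buffer is in use

            def release(self, buf):
                self.free.put(buf)       # hand the buffer back for reuse

    A producer would acquire a buffer, fill it, and pass the reference along in a message; the consumer releases it when done. As noted above, this is only worth the complexity once plain message copies are measurably too slow.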

     

  3. 1 hour ago, Neil Pate said:

    The FGV (Action Engine actually) was used as the global storage of the temperatures and a mechanism for calculating the mean. The temperatures were needed all over the application (mainly for display purposes).

    Please try and remember this was six years ago, and was done way before I had started to seriously use a sensible messaging framework. I am not suggesting this is how I would do it nowadays.

    I wasn't criticizing, just saying that with what I know now, today, I find it hard to even think about how you would use an FGV for this type of application. My thought process immediately goes a different direction.

     

    Edit: Without interjecting myself too much into the other conversation, I don't really care if FGVs are by ref or not; they are often used for the same purpose as a DVR or single element queue, but compared to the DVR they have a number of disadvantages for readability and reasoning about state. The DVR's core disadvantage is that the syntax seems to have been designed by someone who really doesn't want people to use DVRs, while FGV syntax is much more developer-friendly.

     

  4. I try not to use them for new projects, but for maintenance sometimes it's easier and less risky to add one into the code than to manipulate all the data structures to add what I need. For new projects I'd only use DVRs or messaging. Usually messages, but DVRs are certainly useful.

    Quote

     ok fine I have a "main controller" actor/process, so it can go there. However any time I want to interact with it I then have to create messages that the main controller can process

    Depending on how high performance your application is, you can usually just cheat/be awesome by separating out invariant data (for example, you might read a file to find what DAQ channels to use, but they never change after that, so you can pass the constant data around to all the loops) and by sending variable metadata along with the data. In your first example, maybe the loop calculating the deformation could also send "calculation performed using channels 0, 4, 7", and because the channels are known by all loops, you get to send a whopping 24 bits of information and never have to make an annoying request-response message (see the sketch at the end of this post). Computers are very fast, and I think it's better to waste a little bit of their performance rather than try to document a ton of extra messages. Plus, sending metadata+data avoids races if something changes.

    In that first example I'm honestly not sure what you'd use an FGV for; it seems like a much better fit for messages. HW loop broadcasts temp. Calculation loop subscribes to temp and broadcasts deformation. UI subscribes to both and broadcasts control parameters. I know your description is high level, but based on it I'd immediately jump to that implementation. You could use classes too, but I'd keep those inside each process (variable calculation methods, or hardware abstraction).
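    To make the metadata-plus-data idea above concrete, here is a minimal Python sketch (the field names and channel numbers are hypothetical): the result carries the description of how it was computed, so no request-response round trip is ever needed.

        from dataclasses import dataclass
        from typing import Tuple

        @dataclass
        class DeformationMsg:
            # the result travels with the metadata describing it, so a consumer
            # never has to ask the producer "which channels did you use?"
            source_channels: Tuple[int, ...]   # e.g. (0, 4, 7), known to every loop
            deformation: float                 # the computed value itself

        msg = DeformationMsg(source_channels=(0, 4, 7), deformation=1.23e-6)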

  5. Well, it's kind of a weird distinction: what works in SP1 that doesn't in RTM? And if everything still works, then why break it (I say break it because you are not even permitted to try to install the drivers)?

    I can't be sure of the accuracy of this (https://www.netmarketshare.com/operating-system-market-share.aspx?qprid=10&qpcustomd=0), but it says XP is at 9%, so OK, but Win 7 is at nearly 50% of all computers. It doesn't specify service pack, but based on the XP numbers I'd guess NI just dropped support for maybe 20-25% of all computers in the world, overnight, without being clear and up front about it.

  6. Well since you want comments too ;) :

    At one point I was working with some folks on making a timed loop you can plug into. We played around with a variety of features including using a custom timing source, using the timed loop abort functionality, etc. I can't remember them all anymore, but we had a lot of issues. For example, we had totally bizarre issues using a named timing source (if you didn't know, you can wire in a string name like "1 MHz absolute" rather than using the dialog)...sometimes it would work fine, sometimes the loop just would not ever run. On the 9068 in 2013 (first release of linuxrt), the 1 kHz clock in our testing had jitter of up to 1 clock cycle -- that is, if you request 1 ms it might give you 2. I don't know if this is fixed now, I just only ever use the MHz clock.

    All this is basically to say that after really trying to push on the features of the timed loop, I came away having lost a decent chunk of time and uninterested in using it.

    The solution we went with was to just make a really simple class for a timer function: you pass in a time to wait in ns, and you run this inside of a non-timed loop (to avoid timed loop overhead), which in turn was inside of a timed sequence structure (to make it single threaded and give it RT priority). We didn't see any meaningful performance degradation going this route. For your purposes, you could create a 'timing source' which waits on an RT FIFO in polling mode, for example, and tells you if an exit or some other message was received or if the wait time elapsed. Admittedly you do have to write the math for this, keep track of when iterations start and stop, and make it work cross platform -- all the stuff the timed loop does for you -- but you have total control over it and can stop hitting your head against the wall, so that's what we did.
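    Since I can't paste the LabVIEW code, here is a rough Python sketch of that 'timing source' idea, with made-up names: poll an incoming FIFO until either a message arrives or the requested period elapses, and report which one happened.

        import time
        import queue

        def wait_or_message(msg_q, period_ns, start_ns):
            """Return ('message', payload) or ('timeout', None), whichever comes first."""
            deadline = start_ns + period_ns
            while True:
                try:
                    # non-blocking poll, like an RT FIFO read with a 0 timeout
                    return ("message", msg_q.get_nowait())
                except queue.Empty:
                    pass
                if time.monotonic_ns() >= deadline:
                    return ("timeout", None)
                # yield so the poll doesn't peg a core; on an RT target you would
                # tune or remove this depending on the priority scheme
                time.sleep(0)

    The caller keeps track of when each iteration starts and adds period_ns every loop -- that is the bookkeeping the timed loop normally does for you.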

     

  7. On 8/30/2016 at 2:55 AM, kull3rk3ks said:

    So am I correct in assuming that what you are suggesting is that I implement a config class of which I pass the correct child instance to the initialise() method of the messenger class, and the child instance gets its init data from there?

    Not quite; an important point is moving the init to the config class -- the main class can also have an init method, but making the public API be config.init instead of messenger.init means it's type-safe the whole way through. The downside is you must create a config class for each messenger class (even if the only change is a constant on the diagram).

    On 8/30/2016 at 3:14 AM, shoneill said:

    I wouldn't do this as it doesn't solve your problem; it actually only makes it worse, because instead of casting from Variant to whatever data you require in your ACTUAL Initialise function, you are trying to cast objects, which is (AFAIK) less efficient. Doing this with classes versus variants brings nothing new to the table. You can still wire in the wrong configuration class and get run time errors.

    I'm pretty sure casting is faster if it doesn't fail, because it's just moving pointers around. Could be wrong, but it also doesn't really matter here.

    That having been said, see above for my clarification. 

    On 8/30/2016 at 8:43 AM, ShaunR said:

    If you look at this post (see the attachment), Yair created a transport class for some interfaces. He adds a serial one a couple of posts below, so that may be useful to you.

    It's also worth pointing out that if you ever share this with anyone, the messenger name (generic as it is) is already used here: http://sine.ni.com/nips/cds/view/p/lang/en/nid/213091
    You might consider renaming to something simple like "bytestream" or whatever, since that more closely matches what it currently does.

  8. Those work, but you could also just make property node accessors for your class, initialize the parameters you need, and then call init() with no parameters (because the parameters are inside the class). I personally don't like this style but it works as long as you have good defaults.

    What I prefer is two classes: "messenger" and "messenger config". You put all your property nodes on "messenger config", then call init() on the config class. Internally, the config class can know what variety of messenger it belongs to and produce the right one (see the sketch below). I prefer this style because you can add helpers to the config class (like to/from string) and you don't have a bunch of properties on your main (messenger) class where it isn't clear what setting them will do -- for example, will setting baud rate after you call init actually change the baud rate or do nothing? There is a line in the sand between the config and the running states.

    However this much effort is only useful if you're instantiating these things yourself (as the programmer). If you're just sending over a file which says 'load me these interfaces', you may as well just do init(string cfg) cause why not.
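    As a text-language analogue of that two-class split (the class and method names here are invented for illustration), the config object holds the settings and its init() acts as a factory that returns the running messenger, so settings can't silently change after startup.

        class SerialMessenger:
            def __init__(self, port: str, baud: int):
                self._port, self._baud = port, baud   # fixed once running

            def send(self, data: bytes) -> None:
                ...  # real I/O would go here

        class SerialMessengerConfig:
            """Holds settings; knows which messenger variety it produces."""
            def __init__(self):
                self.port = "COM1"
                self.baud = 9600

            def init(self) -> SerialMessenger:
                # the factory step: after this, changing the config object
                # has no effect on the messenger that was already created
                return SerialMessenger(self.port, self.baud)

        cfg = SerialMessengerConfig()
        cfg.baud = 115200
        link = cfg.init()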

  9. for (2):

    cRIOs have a pretty wide range of performance, from 400 MHz power pc to 2 GHz x86. In any case, the results here should give you an idea of the maximum achievable performance: http://www.ni.com/white-paper/5423/en/

    However, I'm extremely surprised by these rates. The last numbers I remember seeing indicated that PXI-class machines could hit maybe 40-50 kHz reliably (doing a useful task) and cRIO-class machines were limited to about 2-3 kHz. What I would expect is something more like the results at the end of this doc (http://www.ni.com/white-paper/14613/en/), which indicate the 9068 is at 50% usage at 1.7 kHz. The document implies that it can go much faster, and it probably can push its way to maybe 2.5 kHz, but it's a dual-core processor, so at some point you're going to hit the limit on one of the cores.

    Scan engine is not limited to 1 kHz, but I think the fastest I managed to get it on a 9068 was around 400-450 usec, and even then it was flaky. EtherCAT runs using scan engine, and you can see the loop rates you can get here: http://www.ni.com/white-paper/52642/en/

    Read/write controls take about 1 usec (each) to execute, give or take some tens of nanoseconds, for a 32-bit value -- so technically, yes, it's possible to hit your rates, but I can't imagine how much they had to tweak and fiddle with (disabling interrupts, uninstalling non-essential software, optimizing code, etc.) to get the numbers in the first link. I've never heard of mere mortals going above 2k.

     

    tldr: I doubt you can hit those rates on any currently available CompactRIO target.

  10. On 8/25/2016 at 11:39 PM, ShaunR said:

    No. Just because you can doesn't mean you should. :nono:

    No. That's Linux thinking and should be derided at every opportunity..:frusty:

    I guess I was thinking if he doesn't trust his user not to make infinite loops they probably shouldn't be trusted with anything else, so an isolated executable would be beneficial...but I guess mathscript can't really do anything?

    Also I'm fairly sure anything approaching linux thinking would involve the words "grep", "vim", "fork", "bash", "sudo", and "just follow these 99 very easy steps", and I included none of those words in my proposal :)

  11. I've never used the mathscript node so I can't speak to that but some options that come to mind would be:

    -Use VI server to launch the VI that actually runs the MathScript in the background, then kill the VI if it takes too long to return.
    -Build the runner VI into an exe and call it, then use Windows to kill the exe if it takes too long (see the sketch at the end of this post).
    -Allow your user to learn why they shouldn't make infinite loops.

    I'm partial to the third but the other two should work if nobody else chimes in with a better way.
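    For the second option, a hedged Python sketch of the general idea (the executable name and the 10-second budget are placeholders): run the script in a separate process and let the operating system kill it when the budget is exceeded.

        import subprocess

        try:
            # subprocess.run() kills the child itself when the timeout expires
            subprocess.run(["mathscript_runner.exe", "user_script.m"], timeout=10)
        except subprocess.TimeoutExpired:
            print("script took too long and was terminated")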

  12. 4 hours ago, Manudelavega said:

    Our software now periodically runs into critical errors where some resources (most likely DVRs) are not released, and many components simply hang. I am highly suspicious that it comes from the Abort Timed Loop function. There are many operations inside the loop that function as pairs (open/release DVR in an IPE, dequeue/enqueue elements, ...) and I want to be sure that one operation won't be performed without its counterpart also being performed. If we're unlucky and the abort occurs just after opening a DVR and just before releasing it, is it possible that the DVR dies?

    The reason for the abort is not to shorten the duration of an iteration, which is always pretty short, but rather to force the loop to stop if it's just sleeping and waiting until it's time to perform its next iteration.

    At this point the only acceptable solution I see would be to use a semaphore: the loop would acquire it when it starts a new iteration and release it when the iteration is done (before going back to sleep). The code sending the abort would only do so when the semaphore is available.

    Am I on the right track?

     

    It doesn't actually abort the timed loop; the help says:

    " If you attempt to abort a running Timed Loop, the Timed Loop immediately executes the current iteration and returns Aborted in the Wakeup Reason output of the Left Data node. "

    You can see this in the form of a horrible horrible example which nobody should ever follow, here: https://zone.ni.com/reference/en-XX/help/371361J-01/lvconcepts/aborting_timed_structs/

    If the code can be run on a Windows target (potentially using remote references to the FPGA, for example, or if you have a HAL), then you can use the Desktop Execution Trace Toolkit to see memory leaks.

    I've never heard of a DVR being locked but never unlocked again; I don't think this is possible (if it is, well, that's just silly). That doesn't mean you still can't deadlock with them. I'd make sure your code doesn't have any DVR IPEs within other DVR IPEs, anywhere, if you can help it. You might also check for fixed-size queues or similar -- I've definitely seen people disable sections of code which read from a queue but forget to disable the enqueuer, and hang their system that way. And I've also definitely seen DVR access within class property nodes which were themselves accessed by reference, leading to a deadlock. I wouldn't imagine any of this is caused by the fact that you're 'aborting' (but not really) the timed loop.
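    For anyone unfamiliar with why nested DVR IPEs hang, a rough Python analogue: a DVR behaves like a non-reentrant lock, so taking it again while you already hold it never returns.

        import threading

        dvr_lock = threading.Lock()   # stand-in for a DVR reference

        with dvr_lock:        # outer In Place Element holds the reference
            with dvr_lock:    # inner access on the same reference blocks forever
                pass          # never reached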

  13. Do you have Vision Acquisition Software 2015 SP1 installed? Maybe the DLL is included there for some reason? In any case, you're probably better off posting on the NI Vision forum since it's an NI product. If you install the package and it doesn't work and provides no reasonable indication of why, they would probably appreciate knowing that the first-time experience is bad.

     

  14. 13 hours ago, hooovahh said:

    The command line option only works if you know no other versions of LabVIEW are currently open. LabVIEW communicates over DDE to other versions of LabVIEW, and that is why sometimes you'll see LabVIEW try to open the wrong VI in the wrong version. If you double click a VI, or open it over the command line, it may choose to open it in the wrong version even if the command line specified which version to open it with. Here is a post I made on it a while ago; many other applications (like VIPM launching LabVIEW, and LabVIEW version selection tools) suffer from this issue. The workaround is using the technique I've mentioned. But if you can ensure no versions of LabVIEW are running, maybe by closing them all, making a VI that runs when opened is much easier.

     

    Oh, that's really annoying. I assumed if you gave it a path to the specific executable you wanted, it would work.

  15. 7 hours ago, MikaelH said:

    So you need to split them into multiple packages with some header data, so you can see that you have received them all and put them together in the correct order.

    This would also require acks, which means bidirectional communication (vs: "my project demands the UDP transmission. Simplex communication.").

    I don't think there is a way to do this with that constraint, and if you don't have that constraint then TCP or FTP will work.

  16. 45 minutes ago, Darren said:

    One thing I can say...I've heard customer sentiment that it's not worth the time to submit NIER crash reports, but that's absolutely *not* true. With each LabVIEW release since NIER's debut, we've fixed *several* crashes as a direct result of the crash reports that were sent in. So continue submitting crash reports (and attaching code where possible and appropriate), because improving stability by solving NIER-reported crashes is something we're committed to with each release.

    My problem is that it regularly fails to send the logs up, even when I'd like it to. (Definitely IT being annoying IT, still doesn't work though)

  17. 4 hours ago, drjdpowell said:

    I tend to use subVIs one layer lower down from the loop actions, as explained in this post.  One can have subVIs that represent actions of the loop itself (as the AF does, for example), but I don’t usually find that to have (minor) disadvantages.

    Cool, I'll take a look. The separation makes sense; I had forgotten it worked that way. I suppose from that direction I've recently been doing something not too far off, which is that events change the state, which leads to some actions. For example, an event might feed in a chunk of data and change the 'desired state' value to "initialized", and then I have a lookup table which says: if current state is X and desired state is Y, your next action is A. It seems preferable to me because it's easy to see what actions will be taken (once you've made the LUT), but it can sometimes be tough to wrap my head around what the transitions need to look like, so I can see where setting up actions more explicitly makes sense.
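    A minimal Python sketch of that lookup table (the state and action names are made up): the key is the (current, desired) pair and the value is the next action, so the whole transition space is visible in one place.

        TRANSITIONS = {
            ("uninitialized", "initialized"): "load_config",
            ("initialized",   "running"):     "start_acquisition",
            ("running",       "stopped"):     "stop_acquisition",
        }

        def next_action(current, desired):
            # no entry means nothing to do (or the transition isn't allowed)
            return TRANSITIONS.get((current, desired), "none")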

    10 hours ago, ShaunR said:

    Well. This is so far off-topic I can't even remember the OP's questions :P Maybe Hooovahh can move this and the rest to another thread?

    I dunno, he was asking about architecture and such. It's not that far off ;)

     
