Everything posted by smithd

  1. So... all of sysconfig. But the finite number of execution threads doesn't seem like it really makes sense. I'm assuming this simplified some code a while back and probably saved a lot on performance, but something in the runtime system that instead says "crap, every thread has been blocked for N ms, better make a new one" doesn't seem that crazy (a rough sketch of that idea is below). Or to put it another way... yes, sure, you could use the async API instead of the sync one, but isn't that what we pay labview for?
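Here's a minimal text-based sketch of that "grow the pool when everything is blocked" watchdog idea, with Python standing in for the runtime. All the names, the pool size, and the threshold are invented for illustration; this is not how LabVIEW's execution system actually works.

```python
# Hypothetical sketch: a worker pool that spawns a new thread when every
# existing worker has been blocked for a while and work is still queued.
import threading, time, queue

class GrowingPool:
    def __init__(self, initial=4, block_threshold_ms=50):
        self.tasks = queue.Queue()
        self.busy = 0                    # workers currently running a task
        self.count = 0                   # total workers spawned
        self.lock = threading.Lock()
        self.threshold = block_threshold_ms / 1000.0
        for _ in range(initial):
            self._spawn()
        threading.Thread(target=self._watchdog, daemon=True).start()

    def _spawn(self):
        with self.lock:
            self.count += 1
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            task = self.tasks.get()
            with self.lock:
                self.busy += 1
            try:
                task()                   # may block on synchronous I/O
            finally:
                with self.lock:
                    self.busy -= 1

    def _watchdog(self):
        # "crap, every thread has been blocked for N ms, better make a new one"
        while True:
            time.sleep(self.threshold)
            with self.lock:
                starved = self.busy == self.count and not self.tasks.empty()
            if starved:
                self._spawn()            # unbounded growth; a real runtime would cap this

    def submit(self, task):
        self.tasks.put(task)
```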
  2. It sounds like you still need some sync mechanism to be sure all the consumers get the data in the right order and all that (assuming your large buffer is actually a stream -- if it's more of a database/tag then sure, a DVR would probably be the better implementation). The overhead of allocating an N0,000 byte buffer on a machine with 4-64 GB of RAM is pretty minimal, and I wouldn't step into DVRs over messages until I know for sure it's too much data to keep up with. To get the best of both worlds you could implement your own preallocated message buffer using a fixed-size queue and some DVRs (see the sketch below), but then we're really going off the deep end. Edit: also, the global has the same overhead since it generates a copy every time, and a DVR will have overhead unless you process the data inside of the in place structure.
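A rough sketch of that preallocated-buffer idea, with Python queues standing in for LabVIEW queues and plain bytearrays playing the role of the DVRs. The pool size and buffer size are made up for illustration:

```python
# Fixed pool of preallocated buffers (the "DVRs") plus a fixed-size data queue.
import queue

POOL_SIZE, BUF_BYTES = 8, 50_000

free_pool = queue.Queue(maxsize=POOL_SIZE)   # preallocated, reusable buffers
data_q = queue.Queue(maxsize=POOL_SIZE)      # in-flight messages, in order

for _ in range(POOL_SIZE):
    free_pool.put(bytearray(BUF_BYTES))      # allocate once, up front

def produce(payload: bytes):
    buf = free_pool.get()                    # blocks if consumers fall behind
    buf[:len(payload)] = payload             # fill in place, no new allocation
    data_q.put((buf, len(payload)))

def consume():
    buf, n = data_q.get()                    # FIFO keeps ordering for consumers
    process(bytes(buf[:n]))                  # copy out only what's needed
    free_pool.put(buf)                       # recycle the buffer

def process(msg: bytes):
    print(len(msg), "bytes")

produce(b"hello")
consume()                                    # -> 5 bytes
```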
  3. I guess it could be. Let's instead say "easy to make, hard to read" vs "annoying to make, easier to read".
  4. I wasn't criticizing, just saying that with what I know now, today, I find it hard to even think about how you would use an FGV for this type of application. My thought process immediately goes a different direction. Edit: Without interjecting myself too much into the other conversation, I don't really care if FGVs are by-ref or not; they are often used for the same purpose as a DVR or single-element queue but have a number of disadvantages for readability and reasoning about state vs the DVR. The DVR's core disadvantage is that the syntax seems to have been designed by someone who really doesn't want people to use DVRs, while FGV syntax is much more developer-friendly.
  5. I try not to use them for new projects, but for maintenance it's sometimes easier and less risky to add one into the code than to manipulate all the data structures to add what I need. For new projects I'd only use DVRs or messaging. Usually messages, but DVRs are certainly useful. Depending on how high-performance your application is, you can usually just cheat/be awesome by separating out invariant data (for example, you might read a file to find what DAQ channels to use, but they never change after that, so you can pass the constant data around to all the loops) and by sending variable metadata along with the data (in your first example, maybe the loop calculating the deformation could also send "calculation performed using channels 0, 4, 7", and because the channels are known by all loops, you get to send a whopping 24 bits of information and never have to make an annoying request-response message). Computers are very fast, and I think it's better to waste a little bit of their performance rather than try to document a ton of extra messages. Plus, sending metadata+data avoids races if something changes. In that first example I'm honestly not sure what you'd use an FGV for; it seems like a much better fit for messages. HW loop broadcasts temp. Calculation loop subscribes to temp and broadcasts deformation. UI subscribes to both and broadcasts control parameters (see the sketch below). I know your description is high level, but based on it I'd immediately jump to that implementation. Could use classes too, but I'd keep that inside each process (variable calculation methods, or hardware abstraction).
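A minimal sketch of that broadcast/subscribe layout, with a toy Python broker standing in for whatever you'd actually use in LabVIEW (user events, queues, or a messaging framework). The topic names and values are invented for illustration:

```python
# Toy publish/subscribe broker: HW loop publishes temp, calc loop subscribes
# to temp and publishes deformation, UI subscribes to both.
import queue

class Broker:
    def __init__(self):
        self.subs = {}                        # topic -> list of subscriber queues

    def subscribe(self, topic):
        q = queue.Queue()
        self.subs.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, msg):
        for q in self.subs.get(topic, []):
            q.put(msg)

broker = Broker()
ui_temp = broker.subscribe("temp")            # UI subscribes to both streams
ui_def = broker.subscribe("deformation")
calc_in = broker.subscribe("temp")            # calc loop subscribes to temp

broker.publish("temp", {"value": 23.5, "channels": (0, 4, 7)})   # HW loop
t = calc_in.get()
broker.publish("deformation", {"value": t["value"] * 0.01,       # calc loop
                               "channels": t["channels"]})       # metadata rides along
print(ui_temp.get(), ui_def.get())                               # UI loop
```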
  6. I think that would be "You won't believe these 10 things that happen when you install labview 2016"
  7. But...but...setting high priority should make the whole labview execution system high priority. Very weird.
  8. Well, it's kind of a weird distinction: what works in SP1 that doesn't in RTM? And if everything still works, then why break it (I say break it because you are not even permitted to try to install the drivers)? I can't be sure of the accuracy of this (https://www.netmarketshare.com/operating-system-market-share.aspx?qprid=10&qpcustomd=0) but it says XP is at 9%, so OK, but Win 7 is at nearly 50% of all computers. It doesn't specify service pack, but based on the XP numbers I'd guess NI just dropped support for maybe 20-25% of all computers in the world, overnight, without being clear and up front about it.
  9. It's nice to see him decide to move on. It'll be interesting to see if the company changes with Davern in charge (especially since he is the CFO/COO with a business and accounting background).
  10. Well, since you want comments too: at one point I was working with some folks on making a timed loop you can plug into. We played around with a variety of features, including using a custom timing source, using the timed loop abort functionality, etc. I can't remember them all anymore, but we had a lot of issues. For example, we had totally bizarre issues using a named timing source (if you didn't know, you can wire in a string name like "1 MHz absolute" rather than using the dialog)... sometimes it would work fine, sometimes the loop just would not ever run. On the 9068 in 2013 (first release of Linux RT), the 1 kHz clock in our testing had jitter of up to 1 clock cycle -- that is, if you request 1 ms it might give you 2. I don't know if this is fixed now; I just only ever use the MHz clock. All this is basically to say that after really trying to push on the features of the timed loop, I came away a decent chunk of time poorer and uninterested in using it. The solution we went with was to just make a really simple class for a timer function: you pass in a time to wait in ns and you run this inside of a non-timed loop (to avoid timed loop overhead), which in turn was inside of a timed sequence structure (to make it single-threaded and give it RT priority). We didn't see any meaningful performance degradation going this route. For your purposes, you could create a 'timing source' which waits on an RT FIFO in polling mode, for example, and tells you if an exit or some other message was received or if the wait time elapsed (a sketch of that idea is below). Admittedly you do have to write the math for this and keep track of when iterations start and stop, and make it work cross-platform -- all the stuff the timed loop does for you -- but you have total control over it and can stop hitting your head against the wall, so that's what we did.
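A sketch of that simple timer class in Python, under some stated assumptions: time.monotonic_ns stands in for the MHz clock, a plain queue stands in for the RT FIFO, and the polling wait mirrors the "RT FIFO in polling mode" idea. The names and period are invented:

```python
# Absolute-deadline period timer that polls a message queue while waiting.
import time, queue

class PeriodTimer:
    def __init__(self, period_ns: int, exit_q: queue.Queue):
        self.period = period_ns
        self.exit_q = exit_q
        self.deadline = time.monotonic_ns() + period_ns

    def wait(self) -> str:
        """Spin until the next period boundary; returns 'timeout' or 'message'."""
        while time.monotonic_ns() < self.deadline:
            try:
                self.exit_q.get_nowait()      # poll instead of blocking
                return "message"
            except queue.Empty:
                pass
        # advance by whole periods so a late iteration doesn't shift the phase
        missed = (time.monotonic_ns() - self.deadline) // self.period + 1
        self.deadline += missed * self.period
        return "timeout"

exit_q = queue.Queue()
timer = PeriodTimer(10_000_000, exit_q)       # 10 ms period
for _ in range(3):
    print(timer.wait())                       # 'timeout' each iteration here
```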
  11. Ah yeah, I was actually thinking about that one, but it's old and I forgot where it was. It's a good thread, if sad.
  12. Not quite -- an important point is moving the init to the config class. The main class can also have an init method, but making the public API be config.init instead of messenger.init means it's type safe the whole way through. The downside is you must create a config class for each messenger class (even if the only change is a constant on the diagram). I'm pretty sure casting is faster if it doesn't fail, because it's just moving pointers around. Could be wrong, but it also doesn't really matter here. That having been said, see above for my clarification. It's also worth pointing out that if you ever share this with anyone, the messenger name (generic as it is) is already used here: http://sine.ni.com/nips/cds/view/p/lang/en/nid/213091 You might consider renaming it to something simple like "bytestream" or whatever, since that more closely matches what it currently does.
  13. Those work, but you could also just make property node accessors for your class, initialize the parameters you need, and then call init() with no parameters (because the parameters are inside the class). I personally don't like this style, but it works as long as you have good defaults. What I prefer is two classes: "messenger" and "messenger config". You put all your property nodes on "messenger config", then call init() on the config class. Internally, the config class can know what variety of messenger it belongs to and produce the right one (see the sketch below). I prefer this style as you can add helpers to the config class (like to/from string) and you don't have a bunch of properties on your main (messenger) class where it isn't clear what setting them will do -- for example, will setting baud rate after you call init actually change the baud rate or do nothing? There is a line in the sand between the config and the running states. However, this much effort is only useful if you're instantiating these things yourself (as the programmer). If you're just sending over a file which says 'load me these interfaces', you may as well just do init(string cfg), because why not.
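Here's roughly what that messenger/config split looks like in text form. The class names mirror the post; the TCP flavor and its fields are invented for illustration:

```python
# "messenger config" holds the settings and knows which messenger to produce;
# the public API is config.init(), so it's type safe the whole way through.
from dataclasses import dataclass

class Messenger:
    def send(self, data: bytes) -> None:
        raise NotImplementedError

class TcpMessenger(Messenger):
    def __init__(self, address: str, port: int):
        self.address, self.port = address, port
    def send(self, data: bytes) -> None:
        print(f"sending {len(data)} bytes to {self.address}:{self.port}")

@dataclass
class TcpMessengerConfig:
    # All the "property node" style settings live here, pre-init...
    address: str = "localhost"
    port: int = 6341

    def init(self) -> Messenger:
        # ...and init() draws the line in the sand between the config
        # state and the running state.
        return TcpMessenger(self.address, self.port)

cfg = TcpMessengerConfig(port=5000)   # tweak settings before init, not after
msgr = cfg.init()
msgr.send(b"hello")
```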
  14. For (2): cRIOs have a pretty wide range of performance, from a 400 MHz PowerPC to a 2 GHz x86. In any case, the results here should give you an idea of the maximum achievable performance: http://www.ni.com/white-paper/5423/en/ However, I'm extremely surprised by these rates. The last numbers I remember seeing indicated that PXI-class machines could hit maybe 40-50 kHz reliably (doing a useful task) and cRIO-class machines were limited to about 2-3 kHz. What I would expect is something more like the results at the end of this doc (http://www.ni.com/white-paper/14613/en/), which indicate the 9068 is at 50% usage at 1.7 kHz. The document implies that it can go much faster, and it probably can push its way to maybe 2.5 kHz, but it's a dual-core processor, so at some point you're going to hit the limit on one of the cores. Scan engine is not limited to 1 kHz, but I think the fastest I managed to get it on a 9068 was around 400-450 usec, and even then it was flaky. EtherCAT runs using scan engine, and you can see the loop rates you can get here: http://www.ni.com/white-paper/52642/en/ Read/write controls take about 1 usec (each) to execute, give or take some tens of nanoseconds, for a 32-bit value -- so technically, yes, it's possible to hit your rates (a quick budget check below), but I can't imagine how much they had to tweak and fiddle (disabling interrupts, uninstalling non-essential software, optimizing code, etc.) to get the numbers in the first link. I've never heard of mere mortals going above 2k. tldr: I doubt you can hit those rates on any currently available CompactRIO target.
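As a back-of-the-envelope check on that ~1 usec/control figure, here's the loop-budget arithmetic; the channel count and target rate below are made-up examples, not measurements:

```python
# Rough loop-budget math using the ~1 usec per 32-bit read/write figure above.
target_rate_hz = 5_000
period_us = 1_000_000 / target_rate_hz       # 200 usec per iteration at 5 kHz
channels = 20                                # hypothetical channel count
io_us = channels * 1.0                       # ~1 usec per control access
print(f"I/O alone uses {io_us / period_us:.0%} of each {period_us:.0f} usec period")
# Everything else (math, messaging, OS jitter) has to fit in the remainder.
```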
  15. I guess I was thinking that if he doesn't trust his users not to make infinite loops, they probably shouldn't be trusted with anything else, so an isolated executable would be beneficial... but I guess mathscript can't really do anything? Also, I'm fairly sure anything approaching Linux thinking would involve the words "grep", "vim", "fork", "bash", "sudo", and "just follow these 99 very easy steps", and I included none of those words in my proposal.
  16. I've never used the mathscript node so I can't speak to that, but some options that come to mind would be:
      - Use VI Server to launch the VI that actually runs the mathscript in the background, then kill the VI if it takes too long to return.
      - Build the runner VI into an exe and call it, then use Windows to kill the exe if it takes too long (see the sketch below).
      - Allow your user to learn why they shouldn't make infinite loops.
      I'm partial to the third, but the other two should work if nobody else chimes in with a better way.
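A sketch of the second option from a supervising script's point of view; the exe name is a placeholder for whatever you build the runner VI into:

```python
# Run a worker process and kill it if it exceeds a timeout.
import subprocess

def run_with_timeout(exe_path, timeout_s):
    proc = subprocess.Popen([exe_path])
    try:
        return proc.wait(timeout=timeout_s)   # normal exit: return its exit code
    except subprocess.TimeoutExpired:
        proc.kill()                           # runaway script: kill the process
        proc.wait()
        return None

# "mathscript_runner.exe" is a hypothetical built exe, not a real artifact.
result = run_with_timeout("mathscript_runner.exe", timeout_s=5.0)
print("timed out" if result is None else f"exited with {result}")
```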
  17. It doesn't actually abort the timed loop; the help says: "If you attempt to abort a running Timed Loop, the Timed Loop immediately executes the current iteration and returns Aborted in the Wakeup Reason output of the Left Data node." You can see this in the form of a horrible, horrible example which nobody should ever follow, here: https://zone.ni.com/reference/en-XX/help/371361J-01/lvconcepts/aborting_timed_structs/ If the code can run on a Windows target (potentially using remote references to the FPGA, for example, or if you have a HAL), then you can use desktop execution trace to look for memory leaks. I've never heard of a DVR being locked but never unlocked again; I don't think this is possible (if it is, well, that's just silly). That doesn't mean you still can't deadlock with them. I'd make sure your code doesn't have any DVR IPEs within other DVR IPEs, anywhere, if you can help it (the sketch below shows why that pattern deadlocks). You might also check for fixed-size queues or similar -- I've definitely seen people disable sections of code which read from a queue but forget to disable the enqueuer, and hang their system that way. And I've also definitely seen DVR access within class property nodes which were themselves accessed by reference, leading to a deadlock. I wouldn't imagine any of this is caused by the fact that you're 'aborting' (but not really) the timed loop.
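For anyone who hasn't hit it before, here's the nested-IPE deadlock in text form, using Python locks as an analogy for DVR in-place element structures (this is an illustration, not LabVIEW code):

```python
# Two "DVRs" locked in opposite orders by two parallel loops: classic deadlock.
import threading, time

a, b = threading.Lock(), threading.Lock()

def loop_one():
    with a:                  # outer IPE on DVR "a"
        time.sleep(0.1)
        with b:              # inner IPE on DVR "b"
            pass

def loop_two():
    with b:                  # outer IPE on DVR "b"
        time.sleep(0.1)
        with a:              # inner IPE on DVR "a" -- lock-order inversion
            pass

t1 = threading.Thread(target=loop_one, daemon=True)
t2 = threading.Thread(target=loop_two, daemon=True)
t1.start(); t2.start()
t1.join(timeout=1); t2.join(timeout=1)
print("deadlocked" if t1.is_alive() else "finished")
```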
  18. I mostly mean testing but I know there are some people who use a separate VM for every labview version or whatever. I've used a VM to host a code server as that made it easy to move between machines.
  19. If you have a VM program you can use these (https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/) to set up an easy deploy system. If you have windows 8 or newer, pro/enterprise, then you have Hyper-V built in.
  20. Do you have Vision Acquisition Software 2015 SP1 installed? Maybe the DLL is included there for some reason? In any case, you're probably better off posting on the NI Vision forum since it's an NI product. If you install the package and it doesn't work and provides no reasonable indication of why, they would probably appreciate knowing that the first-time experience is bad.
  21. Oh, that's really annoying. I assumed if you gave it a path to the specific executable you wanted, it would work.
  22. Even without VI Server, I believe if you go to the command line and type <path>/labview.exe "<mypath>/myVI.vi" it will load the VI and run it (if you have the properties set to run when opened); see the sketch below. http://digital.ni.com/public.nsf/allkb/44E99CC41AA39F538625694B005679C0 https://zone.ni.com/reference/en-XX/help/371361J-01/lvdialog/execution/
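If you'd rather drive that from a script than a shell, this is the same launch wrapped in Python; both paths are placeholders for your actual install and VI, and the VI still needs "run when opened" set:

```python
# Launch LabVIEW with a VI path, equivalent to: labview.exe "myVI.vi"
import subprocess

labview = r"C:\Program Files (x86)\National Instruments\LabVIEW 2016\LabVIEW.exe"
vi_path = r"C:\projects\myVI.vi"

subprocess.Popen([labview, vi_path])   # returns immediately; LabVIEW runs the VI
```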
  23. This would also require acks, which means bidirectional communication (vs: "my project demands the UDP transmission. Simplex communication."). I don't think there is a way to do this with that constraint, and if you don't have that constraint then TCP or FTP will work.
  24. My problem is that it regularly fails to send the logs up, even when I'd like it to. (Definitely IT being annoying IT, still doesn't work though)
  25. Cool, I'll take a look. The separation makes sense; I had forgotten it worked that way. I suppose from that direction I've recently been doing something not too far off, which is that events change the state, which leads to some actions. For example, an event might feed in a chunk of data and change the 'desired state' value to "initialized", and then I have a lookup table which says: if the current state is X and the desired state is Y, your next action is A (see the sketch below). It seems preferable to me because it's easy to see what actions will be taken (once you've made the LUT), but it can sometimes be tough to wrap my head around what the transitions need to look like, so I can see where setting up actions more explicitly makes sense. I dunno, he was asking about architecture and such. It's not that far off.
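A tiny sketch of that (current state, desired state) -> action lookup table; the state and action names here are invented for illustration:

```python
# Events change the desired state; the table decides the next action.
ACTIONS = {
    ("idle",        "initialized"): "load_config",
    ("configured",  "initialized"): "open_hardware",
    ("initialized", "running"):     "start_loops",
    ("running",     "idle"):        "stop_and_close",
}

def next_action(current, desired):
    # Easy to see every action that can be taken, once the table exists.
    return ACTIONS.get((current, desired), "no_op")

print(next_action("idle", "initialized"))   # -> load_config
```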