
smithd

Members
  • Content Count

    763

  • Days Won

    42

Everything posted by smithd

  1. Depends on how fast you want it to run. You could of course store an array on the FPGA and sum it every time, but that will take a while for a decently large array. If that's fine, then you're close; you just need to remember that FPGA only supports fixed-size arrays and that the order in which you sum samples doesn't particularly matter. You really just need an array, a counter to keep track of your oldest value, and Replace Array Subset. If you do need this to run faster you should think about what happens on every iteration -- you only remove one value and add another value. So let's say
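
     A minimal Python sketch of the pattern this post describes (the real thing would be LabVIEW FPGA code; all names here are invented). The trick is that on each iteration only one value leaves the sum and one enters it:

     ```python
     class RunningSum:
         def __init__(self, size):
             self.buffer = [0] * size   # fixed-size array, as required on FPGA
             self.oldest = 0            # counter tracking the oldest element
             self.total = 0             # maintained incrementally

         def add_sample(self, value):
             self.total -= self.buffer[self.oldest]    # drop the oldest value
             self.total += value                       # add the new one
             self.buffer[self.oldest] = value          # "replace array subset"
             self.oldest = (self.oldest + 1) % len(self.buffer)
             return self.total
     ```

     Dividing the returned total by the buffer size gives a moving average at a constant cost per sample, independent of the array length.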
  2. Ah, so you want documentation. That's tougher. I found this (http://digital.ni.com/public.nsf/allkb/FA363EDD5AB5AD8686256F9C001EB323) which mentions the slower speed of timed loops, but in all honesty they aren't that much slower anymore, and you shouldn't really see a difference on Windows, where scheduling jitter will have far more effect than a few cycles of extra calculations. The main reasons as I understand it are: -It puts the loop into a single thread, which means the code can run slower because LabVIEW can't parallelize much -TLs are slightly slower due to bookkeeping and feat
  3. In order to better schedule the code, a timed loop is placed into its own higher-priority thread. Since there is one thread dedicated to the loop, LabVIEW must serialize all timed-loop code (as opposed to normal LabVIEW, where code is executed on any of a ton of different threads). This serialization is one of the reasons the recommendation is to use them only when you absolutely need one of their features, generally synchronizing to a specific external timing source like the scan engine or assigning the thread to a specific CPU. So it will not truly execute in parallel, although of course you still
  4. I'm with paul on this one. I think I said this earlier in the thread but my group ran into some of the same issues on a project and ended up in a pretty terrible state, with things like the excel toolkit being pulled onto the cRIO. Careful management of dependencies is pretty easy to ignore in LabVIEW for a while but it bites you in the end.
  5. If you can reproduce it reliably and can narrow it down to a reasonable set of code, absolutely. I wasn't able to find anything like this in the records when I did a quick search a moment ago. Obviously the simpler the code, the better. Also be sure to note down what type of RT target you're using (I didn't see that in your post above) and any software you might have installed on the target, especially things like time sync, which might mess with the time values in the system. For now, I'd say the easiest workaround would be to just get the current time at the start of each loop iteration usin
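
     The suggestion is truncated above, but the idea -- read the current time yourself at the top of each iteration instead of trusting the loop's nominal timing -- looks roughly like this in Python (the loop rate and all details are assumptions):

     ```python
     import time

     previous = time.monotonic()
     for _ in range(100):
         now = time.monotonic()    # current time read at the start of the iteration
         dt = now - previous       # actual elapsed time, jitter included
         previous = now
         # ... use dt for any rate-dependent math instead of the nominal period ...
         time.sleep(0.1)           # nominal 100 ms loop
     ```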
  6. Wow, awesome. I was under the impression you could just use Variant to Flattened String, which includes the type info, but maybe your VI is what I heard about. And that VI also has a different connector pane than everything else, 3:2 :/
  7. This is very tricky, and I definitely don't totally understand it. Also, despite the NI under my name, I am not remotely part of R&D and so may just be making this stuff up. But it seems to be mostly accurate in my experience. LabVIEW is going to use a set of buffers to store your array. You can see these buffers with the "show buffer allocations" tool, but the tool doesn't show the full story. In the specific image in Shaun's post, there should be no difference between a tunnel and a shift register, because everything in the loop is *completely* read-only, meaning that LabVIEW can referen
  8. With a timed loop I believe it defaults to skipping missed iterations. In the configuration dialog there should be a setting ("mode", I think) which tells it to run right away. However, if this is a Windows machine you shouldn't be using a timed loop at all, as it will probably do more harm than good. And if this *isn't* a Windows machine, then railing your CPU (which is what changing the timed loop mode will do) is not a good idea. -> Just use a normal loop. As for the actual problem you're encountering, it's hard to say without a better look at the code. You might use the profiler tool (http:
  9. If both PCs are running LabVIEW you can use network streams and the flush command to make sure data is transferred and read on the host. Getting an application-level acknowledgement will obviously slow things down tremendously, but if that's what you need....
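
     Network streams are a LabVIEW feature, but the application-level acknowledgement pattern itself is generic. A rough Python-socket analogue, with the length-prefix framing and the b"ACK" reply invented purely for illustration:

     ```python
     import socket

     # Send, then block until the receiver explicitly acknowledges that it
     # has read the data -- the application-level equivalent of a flush.
     def send_with_ack(sock: socket.socket, payload: bytes, timeout: float = 5.0):
         sock.sendall(len(payload).to_bytes(4, "big") + payload)
         sock.settimeout(timeout)
         if sock.recv(3) != b"ACK":   # receiver replies b"ACK" once it has read
             raise IOError("data sent but never acknowledged by the host")
     ```

     The round trip is what costs you the throughput mentioned above: every send now waits on the receiver before the next one can start.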
  10. Why couldn't you install the database on the same machine? Local comms should be fast, and depending on how complex your math is, you might be able to make the DB take care of it for you in the form of a view or function. Unless I'm mistaken, he was pointing out that the representation of data in memory is in the form of pointers (http://www.ni.com/white-paper/3574/en/#toc2, section "What is the in-memory layout..." or http://zone.ni.com/reference/en-XX/help/371361H-01/lvconcepts/how_labview_stores_data_in_memory/). So you have an array of handles, not an array of the actual data. If you r
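
     A loose Python analogy to the array-of-handles layout described in those NI documents -- a Python list likewise stores references to its elements rather than the element data itself:

     ```python
     # The list stores references ("handles"), not the string data itself.
     words = ["alpha", "beta", "gamma"]
     print([hex(id(w)) for w in words])   # per-element handles (addresses, in CPython)

     # Copying the list copies the handles, not the underlying data:
     shallow = list(words)
     print(shallow[0] is words[0])        # True -- same string object
     ```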
  11. I've been using source-separate since 2011, and it seems to work 95% of the time. If you use any of my group's libraries which have been updated recently (much of the stuff on ni.com/referencedesigns), you are also likely using source-separate code. However, because of other issues we've taken to using at least two classes or libraries to help resolve the situation you describe. One is responsible for any data or function which has to be shared between RT and the host. Then there is one class or library for the host, and one for RT. I think my colleague Burt S has already shown you some of t
  12. ^^This is more or less what we've been doing with the configuration editor framework (http://www.ni.com/example/51881/en/) and it's pretty effective -- the CEF is a hierarchy, not containment, but the implementation is close enough. We've also found that for deployed systems (i.e. RT) we end up making three classes for every code module. The first class is the editor, which contains all the UI-related stuff that causes headaches if it's anywhere near an RT system. The second is a config object responsible for ensuring the configuration is always a valid one, as well as doing the to/from string, and the ide
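
     A minimal sketch, with invented names, of the "config object" role described above: it owns validation and the to/from-string conversion, and carries nothing UI-related that would cause headaches near an RT target:

     ```python
     import json

     class ChannelConfig:
         def __init__(self, name="ch0", sample_rate=1000.0):
             self.name = name
             self.sample_rate = sample_rate
             self.validate()

         def validate(self):
             # Ensure the configuration is always a valid one.
             if self.sample_rate <= 0:
                 raise ValueError("sample_rate must be positive")

         def to_string(self):
             return json.dumps({"name": self.name, "sample_rate": self.sample_rate})

         @classmethod
         def from_string(cls, text):
             data = json.loads(text)
             return cls(data["name"], data["sample_rate"])
     ```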
  13. The dongle solution is probably better, but this walks through a similar solution using just the information we can pull from the cRIO software: http://www.ni.com/example/30912/en/ Note this hasn't been updated in ages, so the concepts still work but there might be easier ways to get the data, like the system config tool -- for example, if you pull out the serial number and eth0's MAC address you can be decently sure you have a unique identifier for the cRIO. You might also want to follow many of these steps, especially disabling the FTP or WebDAV server (step 9): http://www.ni.com/white-paper/13272/en/#t
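
     A hedged illustration of the fingerprinting idea -- how you retrieve the serial number and MAC address is up to you (e.g. via the System Configuration API); everything below is hypothetical:

     ```python
     import hashlib

     # Combine the two hardware-specific values into a single opaque ID.
     def controller_id(serial_number: str, mac_address: str) -> str:
         raw = f"{serial_number}:{mac_address}".encode()
         return hashlib.sha256(raw).hexdigest()

     # e.g. controller_id("0x1234ABCD", "00:80:2F:11:22:33")
     ```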
  14. Yeah, XML was just a convenience, but for some reason a lot of people get hung up on that. Sometimes I think it would have been better to include no serialization at all. I don't think I've ever used the XML format for anything. If I were selecting all over again in 2013, I'd pick the JSON parser instead (even if it is a bit of a pain to use for no real reason <_<). Anyway, the MGI files look nice, simple enough format. Good luck with the rest of the CVT.
  15. For the use he described, what would be the superior alternative? I'm on board with what you're saying, generally, but in this case...
  16. I'm somewhat biased, but I'd use the CVT (https://decibel.ni.com/content/docs/DOC-36858) or any of the numerous similar tools out in the world for that. Since the number and composition of the variables can also be loaded from the file alongside the data, it makes it easy to add new fields. Performance is slightly slower than a global, but not by all that much. Plus we already have a nice-ish library for copying data across the network so an HMI can update it (https://decibel.ni.com/content/docs/DOC-37226). I'd use a global if the data I'm changing were more fixed, like a stop (except I'd use n
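
     For readers unfamiliar with the CVT: it is essentially a table of named current values. A toy Python sketch of the concept -- the real CVT is a LabVIEW library, and the tag names and file format here are invented:

     ```python
     import json

     # Tags (name, value) live in one table; the set of tags comes from the
     # file, so adding a field means editing the file, not the code.
     class CurrentValueTable:
         def __init__(self):
             self._tags = {}

         def load(self, path):
             with open(path) as f:
                 self._tags = json.load(f)   # e.g. {"setpoint": 3.5, "stop": false}

         def read(self, name):
             return self._tags[name]

         def write(self, name, value):
             self._tags[name] = value
     ```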
  17. What's supposed to be there? From the sound of it, it looks like it's stuff that's supposed to be accessible over at ni.com/downloads. Is something not there?
  18. Another (I think) valid comparison would be to a .c file or any other single file in a text-based language. You'd usually have a whole class in one file, a whole API in one file, or a whole set of related helper functions in one file. Only once in a while would you make a whole new file to contain a single function. As a result, including that file includes each function in the dependencies of your project, along with that file's dependencies (although the concept of include vs. code files helps with that). When the compile occurs, references to those unused functions are removed. I don't write
  19. Fair example. I tend to end up making simple interface classes for this situation, but I know they aren't for everyone. Ideally I would have three lvlibs: A would be the messaging component, B would be the TCP component, and C would just contain the set of functions which tie them together. I'm doing something similar with a file loader right now. I want a generic hierarchy of essentially key-value pairs so that the lowest-level component can be used everywhere. In the actual application I'm trying to write, I have a file loader which loads my particular hierarchy, which includes a system configuration o
  20. I'd say your lvlibs are probably too big. I tend to consider them to be a single API, not a library of many APIs. This matches well with the features of the lvlib. For example, I can add an icon overlay as part of the lvlib file; if I have too many things, I can't make a single good icon. Everything in the lvlib should be something you want to be loaded atomically, in my opinion. I'm not sure what I would do in situations where the stuff in the lvlib is not an API I want to use. All that having been said, I've never really used the dependencies for anything except to get to parent classes and
  21. What you've shown is a pretty common structure, and I've seen it before. I would comment that instead of manufacturer you might instead specify "family" -- if your devices use a common set of commands like SCPI*, it might make sense to use that as a parent rather than just the specific manufacturer. (*Which I don't really know anything about, but it sounds like it might fit in a "family" category as it was described here.) The other type of abstraction I've seen is a measurement hierarchy (i.e. voltage/current -> strain/accel/etc. -> vibration/sound/etc.). However, I think this is usually b
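
     A quick sketch of the "family" idea in Python (class and method names are illustrative; only the SCPI query string is a real command form): the shared command set lives in a mid-level parent, and manufacturer-specific quirks sit below it.

     ```python
     class Instrument:
         def measure_voltage(self) -> float:
             raise NotImplementedError

     class ScpiInstrument(Instrument):
         """Family parent: anything speaking the common SCPI command set."""
         def __init__(self, send):
             self._send = send   # injected I/O function (VISA write/read, etc.)

         def measure_voltage(self) -> float:
             return float(self._send("MEAS:VOLT:DC?"))   # standard SCPI query

     class VendorXDmm(ScpiInstrument):
         # Only vendor-specific quirks need to live this far down the tree.
         pass
     ```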
  22. Lol, well, ok then, I guess I can't argue with that. Perhaps in some areas, but I can say with unfortunate certainty that people still use that protocol. Very unfortunate certainty.
  23. This is what came to mind for me when I read Shaun's post, but I don't know anything at all about SCPI. From what the wiki tells me, it defines a generic set of messages to be used kind of like J1939 for instruments. It seems like it has just replaced one type of HAL (instrument-centric) with another (network abstraction). You still need to handle communication over Ethernet, USB, serial, etc. (i.e. what VISA does). It also doesn't seem to help at all on the device side. If I'm making more than one device that talks SCPI, I would want to have some standard network interface which takes comma
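
     A toy sketch of that device-side "standard interface which takes commands": parse an incoming SCPI-style line and dispatch it. The command set and replies here are invented for illustration:

     ```python
     # Parse one incoming SCPI-style line and dispatch it; return the reply.
     def handle_command(line: str, state: dict) -> str:
         cmd, _, arg = line.strip().partition(" ")
         if cmd == "*IDN?":
             return "ExampleCo,Widget,0001,1.0"
         if cmd == "MEAS:VOLT:DC?":
             return str(state.get("voltage", 0.0))
         if cmd == "CONF:VOLT:DC":
             state["range"] = float(arg)   # command with an argument, no reply
             return ""
         return "ERROR: unknown command"
     ```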
  24. I've found this to be critical, even for functions where I require overrides. There are situations (for example, if you are debugging and add a conditional disable but forget to wire things through, or if you do something similar with a for loop that runs zero times) where a class wire can get invalidated. If the wire type is that of the parent class (common), you can still get an instance of that parent class, and you will never know what's going on unless you throw an error. It's pretty horrifying to realize you've spent half a day debugging a billion reentrant VIs on cRIO targets just because you
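
     The defensive pattern being described, translated to Python (names invented): give the parent's must-override method a body that does nothing but raise, so a bare parent instance fails loudly instead of silently doing nothing:

     ```python
     class Device:
         def read(self):
             # Exists only to fail loudly: if this ever runs, a bare
             # parent-class instance leaked onto the wire (e.g. via a
             # disabled case or a zero-iteration for loop upstream).
             raise NotImplementedError("Device.read called on the parent class")

     class CryoController(Device):
         def read(self):
             return 4.2   # the real override does the actual work
     ```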
  25. I'm not totally sure I understand everything you want to do, but I'll take a stab. First, "Or should the upper layer have some sort of a periodic check of connectivity?" seems like an important part of the design. There is no way to discover the connected state unless you call some code, so the two options would be to have some sort of thread attached to the object responsible for checking this state (i.e. something more like an actor) or to have the application code check periodically. To me, it feels like you only need to check for this condition when you try to talk on the network.
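
     A small sketch of that check-on-use option (all names assumed): the send call itself verifies the connection and lazily reconnects, so no background polling thread is needed:

     ```python
     class Connection:
         def __init__(self, connect):
             self._connect = connect   # factory returning a live socket-like object
             self._sock = None

         def send(self, data: bytes):
             if self._sock is None:    # lazily (re)establish on first use
                 self._sock = self._connect()
             try:
                 self._sock.sendall(data)
             except OSError:
                 self._sock = None     # mark dead so the next call reconnects
                 raise                 # let the caller decide whether to retry
     ```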
