Everything posted by smithd

  1. For the use he described, what would be the superior alternative? I'm on board with what you're saying, generally, but in this case...
  2. I'm somewhat biased but I'd use CVT (https://decibel.ni.com/content/docs/DOC-36858) or any of the numerous similar tools out in the world for that. Since the number and composition of the variables can also be loaded from the file alongside the data, it makes it easy to add new fields. Performance is slightly slower than a global, but not by all that much. Plus we already have a nice-ish library for copying data across the network so an HMI can update it (https://decibel.ni.com/content/docs/DOC-37226). I'd use a global if the data I'm changing were more fixed, like a stop (except I'd use notifiers for that) or maybe some timing information. For your specific situation, where the count is small and timing is really not critical, honestly shared variables would probably be OK. Probably. SVs can also be referenced programmatically, so all you'd need is some VI that reads your INI file, translates the token names into variable names, and writes the appropriate value into the SV.
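The last idea in that post (a VI that reads the INI file and maps token names onto variable names) can be sketched outside LabVIEW too. Here is a minimal Python version; the "[tags]" section name and the float-typed values are assumptions for illustration, not part of the CVT or shared variable APIs:

```python
# Hypothetical sketch: load tag names/values from an INI file so that
# adding a field to the file adds a tag without any code changes.
from configparser import ConfigParser

def load_tags(ini_text):
    """Parse the [tags] section into a {name: value} dict."""
    cp = ConfigParser()
    cp.read_string(ini_text)
    return {name: float(value) for name, value in cp.items("tags")}

example = """
[tags]
setpoint = 5.0
ramp_rate = 0.25
"""
tags = load_tags(example)  # each key becomes a tag/variable name
```

The same loop that builds this dict could then write each entry into the correspondingly named shared variable or CVT tag.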
  3. What's supposed to be there? From the sound of it, it looks like it's stuff that is supposed to be accessible over at ni.com/downloads. Is something not there?
  4. Another (I think) valid comparison would be to a .c file or any other single file in a text-based language. You'd usually have a whole class in one file, a whole API in one file, or a whole set of related helper functions in one file. Only once in a while would you make a whole new file to contain a single function. As a result, including that file includes each function in the dependencies of your project, along with that file's dependencies (although the concept of header vs. code files helps with that). When the compile occurs, references to those unused functions are removed. I don't write a lot of C, but that's how I understand the process, and it feels similar to me. On the other hand, Jack's main point was that lvlibs don't really work that well, which is fair.
  5. Fair example. I tend to end up making simple interface classes for this situation, but I know they aren't for everyone. Ideally I would have 3 lvlibs. A would be the messaging component, B would be the tcp component, C would just contain the set of functions which tie them together. I'm doing something similar with a file loader right now. I want a generic hierarchy of essentially key-value pairs so that the lowest-level component can be used everywhere. In the actual application I'm trying to write, I have a file loader which loads my particular hierarchy, which includes a system configuration object that contains N process objects which contain M plugin objects. So now that I've gotten the two libraries where I want them, I'm writing a 3rd library which converts between the very generic hierarchy and the specific hierarchy, as well as allowing for some useful manipulations of the data set (let's say I only want to load process #4 and its plugins, but not the rest of the system). Edit: It's worth mentioning that while this is something I am definitely doing, I'd probably simplify it if I were writing from scratch -- I'm trying to refactor months of work so it can be used on a few different projects, which is why having these weird dependency links is currently desirable. Downsides to this are that it can't always work (but it usually works well enough for me not to worry about using lvlibs) and it does lead to occasional instances of bloat and situations where I have to convert between the types used by the different libraries, but it seems to work well enough for my purposes.
  6. I'd say your lvlibs are probably too big. I tend to consider them to be a single API, not a library of many APIs. This matches well with the features of the lvlib. For example, I can add an icon overlay as part of the lvlib file. If I have too many things, I can't make a single good icon. Everything in the lvlib should be something you want to be loaded atomically, in my opinion. I'm not sure what I would do in situations where the stuff in the lvlib is not an API I want to use. All that having been said, I've never really used the dependencies for anything except to get to parent classes and the like, so I'm not sure what sort of filtering you want to do on that information. But I would also say that keeping dependencies in lvlibs keeps your dependencies organized -- you don't have 1000 stray files from vi.lib, you have 5 classes and 5 lvlibs. In fact, I have the reverse problem you have. Since LabVIEW counts things in lvlibs as dependencies but doesn't necessarily load them into memory, things like find-and-replace or "show error window" don't actually work on them unless you manually open the front panel of every VI.
  7. What you've shown is a pretty common structure, and I've seen it before. I would comment that instead of manufacturer you might instead specify "family" -- if your devices use a common set of commands like SCPI*, it might make sense to use that as a parent, rather than just the specific manufacturer. (*which I don't really know anything about, but it sounds like it might fit in a "family" category as it was described here) The other type of abstraction I've seen is a measurement hierarchy (i.e. voltage/current -> strain/accel/etc -> vibration/sound/etc). However, I think this is usually built on top of a HAL, so it may be an entirely different boat that isn't too useful to you. Something else you might want to decide is whether you want the abstraction to be called directly by other code or whether you want the functionality to be in the form of actors (either Actor Framework or regular queued message handlers). A neat implementation (though probably more complicated than what you need right now) was developed by Eli Kerry and is available here: https://decibel.ni.com/content/docs/DOC-21441 I wouldn't necessarily recommend this for 1.0, but you did ask who else has done this, and it's a pretty neat design in my opinion. Finally, if you need some help convincing your colleagues (or teaching them), this looks like a decent webcast (also from Eli Kerry): http://ekerry.wordpress.com/2014/06/10/introduction-to-object-oriented-programming-for-hal-design-in-labview/ My guess would be that it wouldn't benefit you personally (it seems more basic than what you already seem to know), but again it might benefit some of the guys around you.
  8. Lol, well ok then, I guess I can't argue with that. Perhaps in some areas, but I can say with unfortunate certainty that people still use that protocol. Very unfortunate certainty.
  9. This is what came to mind for me when I read Shaun's post, but I don't know anything at all about SCPI. From what the wiki tells me, it defines a generic set of messages to be used kind of like J1939 for instruments. It seems like it has just replaced one type of HAL (instrument-centric) with another (network abstraction). You still need to handle communication over ethernet, usb, serial, etc. (i.e. what VISA does). It also doesn't seem to help at all on the device side. If I'm making more than one device that talks SCPI, I would want to have some standard network interface which takes commands and passes them to an abstraction layer before returning the response. The abstraction layer would handle the differences between my devices when responding to each command. Anyway, to me lvoop amounts to a dynamic cluster with a function pointer and occasional linking issues. When I need to have a dynamic cluster and function pointer, I use lvoop. When I don't need those things, I don't use lvoop.
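The device-side idea above (a standard network interface that hands commands to an abstraction layer) might look something like this Python sketch; the command string, Device class, and reply format are all invented for illustration and are not real SCPI:

```python
# Hypothetical sketch: a transport-agnostic command handler that parses
# SCPI-style strings and forwards them to a device abstraction layer.
class Device:
    def measure_voltage(self):
        return 1.23  # stand-in for real hardware access

class CommandHandler:
    def __init__(self, device):
        # The abstraction layer: map command strings to device methods.
        self.routes = {"MEAS:VOLT?": device.measure_voltage}

    def handle(self, command):
        action = self.routes.get(command.strip().upper())
        return "ERR" if action is None else str(action())

handler = CommandHandler(Device())
reply = handler.handle("meas:volt?")
```

Swapping in a different Device subclass changes how each command is answered without touching the network-facing code, which is the point being made above.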
  10. I've found this to be critical, even for functions where I require overrides. There are situations (for example, if you are debugging and add a conditional disable but forget to wire things through, or if you do something similar with a for loop that runs 0 times) where a class wire can get invalidated. If the wire type is that of the parent class (common), you can still get an instance of that parent class, and you will never know what's going on unless you throw an error. It's pretty horrifying to realize you've spent half a day debugging a billion reentrant VIs on cRIO targets just because you dropped down one disable structure and forgot to wire through the enabled case. tl;dr: throw errors in parent functions which shouldn't be called. It's a good idea.
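The same "fail loudly in parent methods that shouldn't be called" advice translates directly to text languages. A minimal Python sketch, with all class names made up:

```python
# Sketch: the base method raises instead of silently returning a
# default, so a missing override is caught immediately at the call site
# rather than after half a day of debugging.
class Reader:
    def read(self):
        raise NotImplementedError("Reader.read must be overridden")

class FileReader(Reader):
    def read(self):
        return "data"
```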
  11. I'm not totally sure I understand everything you want to do, but I'll take a stab. First, "Or should the upper layer have some sort of a periodic check of connectivity?" seems like an important part of the design. There is no way to discover the connected state unless you call some code, so the two options for that would be to have some sort of thread attached to the object responsible for checking this state (i.e. something more like an actor) or to have the application code check periodically. To me, it feels like you only need to check for this condition when you try to talk on the network. For example, if your TCP client disconnects, you don't know until you try to perform a read or write. That is when LabVIEW informs you. As for implementing these checks, it sounds to me like the answer might be the template method pattern (http://en.wikipedia.org/wiki/Template_method_pattern). As I understand this pattern, you would basically implement "ExecuteI2CTransaction" as static dispatch, have it call a series of steps ("check disconnected", "validate inputs", etc., etc., "finalize transaction"), and then make some of those steps (like "validate" and "finalize") dynamic dispatch. This ensures that you have some structure you can enforce while still allowing the objects to function as a HAL. Do either of these help, or am I missing the point entirely?
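A rough Python sketch of the template-method structure described above; the step names and classes are invented for illustration, not an I2C API:

```python
# Sketch: the transaction skeleton is fixed (the "static dispatch"
# part), while individual steps are overridable (the "dynamic
# dispatch" part).
class I2CDevice:
    def execute_transaction(self, data):
        # Fixed skeleton: every device runs the steps in this order.
        self.check_connected()
        self.validate(data)
        result = self.transfer(data)
        self.finalize()
        return result

    def check_connected(self):
        pass  # default: assume connected; overrides can poll hardware

    def validate(self, data):
        if not data:
            raise ValueError("empty transaction")

    def transfer(self, data):
        raise NotImplementedError  # must be overridden per device

    def finalize(self):
        pass

class LoopbackDevice(I2CDevice):
    def transfer(self, data):
        return list(reversed(data))
```

This enforces the structure (connectivity check, validation, cleanup) while each concrete device only supplies the steps that actually differ.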
  12. Yes and no. I've had to characterize this recently with a cluster of various arrays, and using the RT trace toolkit (9068 in LVRT2013) I've found that:
      - If you pass a dataset into a queue, the original (upstream) producer of that dataset must generate a new buffer, because the current buffer is handed off to the queue.
      - If that upstream producer is an empty queue, an empty buffer must be allocated. For some reason I don't understand, this ends up with something like 5 wait-on-memory flags in the trace.
      - If that upstream producer is a full queue, no new buffer will be allocated.
      - If the buffer (pulled out of any of the queues) is fed to another API, like IMAQ, you'll end up losing that buffer and you'll need to allocate a new one, unless the API can avoid stomping on the data.
      tl;dr: in normal operation you won't make a copy of the full dataset by using a queue until you pass that dataset to someone else. For determinism purposes, dequeuing an empty queue will cause an allocation, which is why we have RT FIFOs. If you can avoid another API stomping on the data, you can pass a set of buffers through a loop of queues (A->B->C->A...) without allocations. Obviously all of the above is for my use case only, and you should always use a trace tool to confirm that your code behaves as expected if performance is critical to your application.
  13. I don't mean to be that guy, but I think creating an express VI would get you halfway there. Express VIs can go on the palettes and can contain any code. Once you drop one down, you can right-click and select "Open Front Panel", and it will prompt you to convert the VI from an express VI into a normal VI. You can then save this converted express VI and boom, you have your desired outcome. Negatives that I am aware of:
      - Your code starts out in humongous express VI form (rather than as an icon).
      - Your code would, technically, be an express VI.
      That having been said, if this is a common use case for you, I think making some sort of tool would be the cleaner way of doing it.
  14. Do either of these help? (I have no clue why there are two, they're basically the same ) http://digital.ni.com/public.nsf/allkb/410F2EC66F60F9B0862569EE006F4FA0 http://digital.ni.com/public.nsf/allkb/705C2ECA081F3C7986256C0F00559B02?OpenDocument The real question I have to ask is whether you need the call to be dynamic, or just asynchronous. If it does not need to be dynamic, just use a static VI reference. This will ensure app builder finds, includes, and loads the appropriate code into the exe. This is what many of the LabVIEW examples do, so take a look in the example finder (specifically the section on controlling applications programmatically, which is code for "VI server"). If it absolutely needs to be dynamic, then you're better off following the KB steps to set the VI in question (St1-TCP.....) as always included, specify a specific folder relative to the exe (for example ../dynamic dependencies), and use a relative path rather than the fixed path you have there.
  15. Are you referring to the "Reason-Phrase" (http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html)? If so, I've tried everything I can think of and have been unable to come up with a way to set that field. LV seems to default to the ones defined by HTTP. If you just want to send data back, you can always just use "set response.vi". I just tried that, setting the header to 401 and writing a response string ("OH NO! ERROR!" in my case), and the browser correctly displayed that string despite the error code. Are you saying this didn't work for you?
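For what it's worth, the coupling between status code and Reason-Phrase can be illustrated with Python's stdlib table of default phrases; the build_response helper below is hypothetical and is not a LabVIEW web service API:

```python
# Sketch: the Reason-Phrase conventionally follows the status code
# (401 -> "Unauthorized"), and a body can be sent regardless of the
# error code, which matches the browser behavior described above.
from http.client import responses

def build_response(status, body):
    phrase = responses.get(status, "")
    head = "HTTP/1.1 {} {}".format(status, phrase)
    return head + "\r\n\r\n" + body

resp = build_response(401, "OH NO! ERROR!")
```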
  16. I think most people would disagree with me, but I think of it like this... it feels to me as though a VI is only something where the UI is intended to be used (even if just as a dialog or debug screen). Anything else, which is intended to always be called by something else with a UI, is a subVI, function, or subroutine. I think I personally would just call it "a function to do blah". I mean, technically I don't care if it's written in LabVIEW, just that it does what I want it to do. This becomes tricky with RT, where the top-level VI should be the only function in your entire codebase which *doesn't* have anything on the front panel, but... it's still the thing you go to in order to run the system, so that's still the "UI".
  17. Something posted above makes me less confident about my use, but I have a set of objects which store configuration information and a set of objects which present a UI for that configuration. However, I don't want the config objects to know about the UI objects, *and* this all has to be in a plugin system. I want it to be {some subset of N}:{some subset of N}. That is, editor A may support configurations 1 and 2, but configuration 2 may also be used by editor B. So what I did was I made sure the editors know what classes they can support and used Preserve Run-Time Class to test every available configuration object (1, 2, 3) against the configuration objects supported by every editor (A, B, C). This seems to successfully let me know if (unknown config object) is supported by editor A, B, or C. I don't know if there is a better way to determine class hierarchies in the runtime environment, but this is what I came up with.
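A Python analogue of that compatibility test, with isinstance standing in for Preserve Run-Time Class; all class and editor names here are made up:

```python
# Sketch: each editor declares the config classes it supports, and an
# unknown config object is matched against every editor's list.
class Config1: pass
class Config2: pass
class Config3: pass

editors = {
    "A": (Config1, Config2),
    "B": (Config2,),
    "C": (Config3,),
}

def supporting_editors(config):
    """Return the names of editors that can handle this config object."""
    return [name for name, supported in editors.items()
            if isinstance(config, supported)]
```

Because the check is against each editor's declared tuple of classes, the {some subset}:{some subset} mapping falls out naturally: one config can match several editors and vice versa.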
  18. Glad to hear it. One other tip you might keep in mind is to ensure you have a timeout of 0 in your DMA loop, or otherwise fairly tightly match the timing of the loop with the timing of the DMA. Any time period in which the DMA node is waiting on data is a busy wait -- that is, it is hogging the CPU looking for data. This will definitely hurt the performance of your system (even if you are able to keep up, it can't hurt to do a little bit better ). This limitation is resolved in the shiny new cRIO released a few weeks ago, and the solution is documented here: http://digital.ni.com/public.nsf/allkb/583DDFF1829F51C1862575AA007AC792
  19. It does sound like you should be able to get better than that, but it's hard to say without breaking up the problem a bit. One option is that the code is doing something strange which limits your max log rate. I'd try running the code and having it log to the local disk to see what rates you can get. Then you can compare to these benchmarks, which are on the main disk: http://www.ni.com/example/31206/en/#toc7 (the sbRIO should be a little better than the 9074, but likely not as fast as the 9068). Assuming you get the expected rates when writing locally, you can start looking at the USB. There is absolutely no way you're going to hit 480 Mbit/s -- that's the *bus* limit, not the rate at which your CPU or the attached device can log to disk. But you can still look at the performance and try to improve it to meet your needs. One option is trying a different USB stick. Another thing worth checking is the formatting. I believe the sbRIOs should be able to handle FAT16 or FAT32, but you won't get anywhere near the speed of an NTFS system. (Source: Random person on the internet: http://www.msfn.org/board/topic/125116-fat16-vs-fat32-vs-ntfs-speed-on-usb-stick/)
  20. Hey sharkera, That toolset actually consists of a few things. The first package, tagwebclient, is a javascript+HTML web page along with some LabVIEW type definitions which are intended to allow you to create new web services which provide tag-oriented data to the web using a standard interface. At present it needs some work, but the basic concept is down. For example, there is a "tag browse" page which requests the available list of tags from a correctly-formatted web service. The second package, cvt web addon, is a LabVIEW web service which uses the CVT and aims to meet the contract required for the tagwebclient. Originally, these packages were combined but we are interested in making a web service which supports the new tag bus library (which is basically CVT on a wire): https://decibel.ni.com/content/docs/DOC-36995 Finally, the cvt web addon does have a sample websocket server. However, to the best of my knowledge it currently just supports sending strings back and forth. It is not used to transfer tag data -- standard labview web service functions are used for that. Thanks, Daniel
  21. Sorry about that. The link was to a poll on the CLA community, as Christian mentioned. The results indicated heavy support for opening the GDS toolkit which is now covered under the process identified here: https://decibel.ni.com/content/groups/community-source-of-ni-code This new discussion was intended to be more open-ended than the poll. It is posted here: https://decibel.ni.com/content/thread/23441
  22. Do you still have this (or have you posted it somewhere)? When I found my most recent issue with property nodes (which is fixed), I went through the code manually to do the same. Interesting changes. I wasn't sure whether writing the same value would be optimized out, so that's why I made a new random value for each run. Is there an easy way to see what code gets eliminated? The reason I personally don't like property nodes is the error checking. I know a lot of people disagree with me, but I prefer not having error nodes if you don't expect that code to throw an error. Accessors shouldn't throw errors (unless I know I want to do validation). Property nodes force me to add error wires which I never expect to use. However, property nodes do keep things clean in some cases, and they really improve discoverability of data, so I see them as a necessary evil.
  23. I was curious so I put something together (well most of it was stuff I've had lying around from previous benchmarks). I figured my first go at the benchmarks would be wrong, so there is a script to generate them. 1. Run gentests/gentests.vi 2. Open Benchmarker.vi and set the path to a generated report file 3. Run benchmarker.vi My first attempt is attached. If you want to change the benchmarks, they are the templates in the folder. From this first run, I don't see a difference, but as I said my first attempts are usually wrong when it comes to benchmarking. propbenchmark.zip first run.zip
  24. Something stupid I've done in the past when I'm in a hurry and can't figure out what that VI wants from me is to create something known (like a numeric) and then on the newly generated control, immediately call replace. Replace has a path input. I'm not a big fan, but it usually works.
  25. If anything, issues with property nodes have gotten better over the last few years -- at least in my usage. All of the really bad bugs I've seen have been fixed as of this most recent patch (see this recent thread for an example: http://lavag.org/topic/18308-lvoop-compiler-bug-with-property-nodes/). But, there are still a lot of usability annoyances related to case sensitivity. For example, if I name my folder Properties in the parent class and "properties" in the child class, you'll see two property items which link to the same dynamic dispatch VI. When a recent code review found a misspelling in the property name of a parent class, I had to go through every child implementation (~15 at this point) and fix the spelling there, because the classes were broken somehow. This and more is tracked by CAR 462513. Finally, I remember seeing Stephen post somewhere that property nodes, due to their inherent case structure, prevent LabVIEW from optimizing the code -- even if the VI is set to inlined. For this reason alone I avoid them in any code I expect to be running at high-priority. For UI, however, I use them extensively.