Everything posted by PaulL

  1. That's really what I was getting at. I'd like to ensure that maintenance for forward compatibility occurs. (Some pieces may not need future changes, but I am guessing some will.) Sounds good.
  2. Daklu, I, too, am very interested in learning more about what you are trying to do. Before I sign on, I'd like to know a bit more about your vision for this. (I don't have much time to work on things outside my current project, but if the dividends from common development were significant then I could justify putting some effort into this.) By the way, I like certain things about OpenG, but one thing I personally am not so fond of is that using a simple function (take Filter 1D Array, for example) usually fills up my dependencies (under user.lib) with many functions, and this information is not structured. That may not be much of an issue for the project under discussion, since presumably everything would exist in classes, so that the dependencies would have class library structures, but I decided to mention this because in practice this (and a couple of unavoidable issues with OpenG--no guarantee of forward compatibility, for instance) troubles me enough that I almost never use OpenG methods in my own projects (though I realize many developers do this quite successfully!). On the other hand, I think OpenG offers an existing popular distribution mechanism. I wonder if there might even be an option to sell the resulting IP to NI for implementation in the core LabVIEW libraries.... Paul
  3. In my opinion, yes! It would change your approach a bit, but if you were willing to use Object-Oriented design patterns you could use a combination of the Command Pattern (in particular, you could include the target controller as an object inside the command, thus handling all commands for all controllers through a single--preferably asynchronous--message channel) and the State Pattern. You can create a Controller class and then create subclasses that inherit from this.
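To make the idea concrete, here is a minimal sketch in Python (used as text-based pseudocode, since LabVIEW is graphical); all class and method names are hypothetical:

```python
# Sketch of the Command Pattern with the target controller carried
# inside the command, so one message channel serves every controller.
from abc import ABC, abstractmethod

class Controller(ABC):
    """Parent class; concrete controllers inherit and override behavior."""
    @abstractmethod
    def start(self): ...
    @abstractmethod
    def stop(self): ...

class MotorController(Controller):
    def start(self): print("motor: starting")
    def stop(self): print("motor: stopping")

class Command(ABC):
    """Each command carries its target controller as member data."""
    def __init__(self, target: Controller):
        self.target = target
    @abstractmethod
    def execute(self): ...

class StartCommand(Command):
    def execute(self): self.target.start()

class StopCommand(Command):
    def execute(self): self.target.stop()

# A single (preferably asynchronous) message channel: the dispatcher
# never needs to know which controller a given command addresses.
def dispatch(queue):
    for cmd in queue:
        cmd.execute()

motor = MotorController()
dispatch([StartCommand(motor), StopCommand(motor)])
```

The State Pattern would then live inside each controller, so the same command can behave differently depending on the controller's current state.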
  4. Are you sure there is no way VI B can operate on the array? If not, then I think you will have to put n instances of B on A's block diagram. (This is annoying, and even more trouble since you don't know the array size ahead of time. If you can at least bound the array size you can use case structures, but this is ugly.) If there is any way to operate on the array, that is the way to go!
  5. Yes! We use Subversion and JIRA. When we check in changes (to our code or model, say) and reference the JIRA issue, one of the Atlassian tools (Crucible, I think) updates the JIRA issue with the information from Subversion. It works great! We have found JIRA to be an outstanding tool in general. Paul
  6. Ben, Yes, this is definitely a limitation with the shared variable nodes. DataSocket and the new-to-2009 shared variable API methods accept a URL (though of different formats, so I guess the latter isn't truly a URL!), which allows remapping at run-time. In practice, however, I still use the nodes, because I don't have to pass references to maintain buffering and performance when I use the nodes, and I think the nodes are a lot easier to read. Also, I find that when I want to "redirect" the shared variables I do this because I want to "clone" a behavior, and I find it simpler to rename the library or copy it (and in the latter case I have started writing override methods to point to the shared variable I want). I'll explain that last part. I may have a high-level state shared variable that I reuse in all components. I write the overall behavior in a Component class, but I have an updateStatusSV method that each child overrides to point to the variable for that particular component. (In all fairness, one could argue there is a slight cost to readability, but I think the cost is quite minor.) If I really needed to change the logging levels on the fly or something like that (which may yet happen) I would definitely have to do more programmatically. My other uneasiness about DataSocket with shared variables is ambiguity: 1) I can specify a shared variable URL with dstp or psp. I know to use psp, but the choice is strange, and I'm not sure I understand the differences precisely. Besides, why do I need a "DataSocket" API to communicate with PSP? That has never made sense to me. 2) The DataSocket URL does not provide exactly the same functionality as the shared variable API. (The differences are minuscule perhaps--for instance, an indicator bound with DataSocket is not capable of blinking while in an alarm state--but then I guess my concern is that I don't know exactly what the differences are.) Oh, and when I bind a control I can use DataSocket or NI-PSP, but when I use a DataSocket Read VI on a block diagram I can specify a dstp or a psp URL. NI does not use the terms consistently. By the way, I am definitely not arguing against using the DataSocket API. I think it is an entirely valid and effective API, and it clearly provides some capabilities that shared variable nodes do not. What I would like to see is a single unified shared-variable-only API that cleanly implements all the allowed functionality unambiguously, such that the decision to use this API would be the obvious choice. Paul
  7. Some comments regarding scalability:
1) Performance: I created my own bench test a while back, programmatically creating 1000 SVs and writing 1000 messages to each. I don't remember the exact numbers, but the performance was quite good.
2) Ability to configure SVs easily. What exists:
a) Control binding:
i) Manual configuration of each control
ii) The Front Panel Binding Mass Configuration tool
iii) Programmatically updating the URLs with the DataSocket API
b) Configuring SVs:
i) The SV library interface (good but not ideal for a very large number of variables)
ii) The Multiple Variable Editor (a convenient spreadsheet-like interface for each library)
iii) The DSC SV creation and configuration methods
iv) To clone a library (same variables, new library name), one can just use a method to copy it on disk.
This may be a personal thing, but I don't like using the DataSocket API for shared variables in general. (The read/write SV methods only use buffering if you pass the reference, unlike SV nodes, which also make the code much easier to read. Moreover, I don't like the concept of using DataSocket methods for all sorts of unrelated communication mechanisms. I think NI should incorporate all shared variable capabilities into a single consistent API--and there has been some progress here.) What we found is that the most common reason we want to do things programmatically is when we want to clone a software component (to make multiple instances of an application). Most often we can accomplish this by renaming a SV library before building an executable. I think the builder should support this in the build script (but it doesn't currently). Another common reason to configure SVs on the fly is to change the logging configuration (resulting in more or less frequent logging rates). One can do this by programmatically creating and configuring SVs using the DSC methods, or one can use the Multiple Variable Editor interface. (An outstanding task for us is to see if we can build the libraries with an executable such that we can edit them while an executable is running, as we can in the development environment.)
  8. The closest you might be able to get with the built-in units is just a frequency: Hz. That might be sufficient, though.
  9. Yes, I think it was about then that NI rewrote networked SVs to use TCP instead of UDP, which dramatically improved their reliability. There have been many further improvements since then. The article to which sachsm linked actually uses an API that NI introduced in LabVIEW 2009. The networked shared variables aren't perfect now, but they are probably three orders of magnitude better than they were in 8.2. For anything but the simplest application (and maybe even for that) I think you might save months' worth of work by upgrading, if that is in any way a possibility. In my mind networked shared variables were almost unusable then, but now we successfully apply them as a major feature in every application.
  10. Yes, we have done this, but it was some time ago so I don't remember the exact details. I think the shared variable engine just used the first network adapter in the network connections list. These links may help: Deploying SVs to a Specific Network Card (a more direct solution if it still applies) Connecting to Ethernet Targets with Multiple Network Cards in the Host
  11. Martin, One thing I have done successfully in a similar type of application is to use the Command Pattern. Ultimately I only need a single controller that receives messages on a single thread (I use a single shared variable), and the controller delegates the individual tasks to the appropriate model (which the received command itself specifies). Paul
  12. I finally decided to write a real application using OO on RT. I am wondering what experiences others who have tried this have had. Maybe we can help each other avoid the same issues? I did write a couple small test projects using OO on RT and found that they worked fine, albeit a little more slowly than the comparable cluster-based implementation (which is what I eventually used for my previous actual subsystem code). Accordingly, I decided to implement the next subsystem in OO on RT, hoping that this would allow me to take advantage of 1) reusable code (common to subsystems), 2) object-oriented patterns (the State Pattern, particularly), and 3) ownership of data and methods in a class (to take advantage of OO file management and structure, which makes the code much easier to debug and maintain). I found writing the code wasn't markedly different from working on My Computer (not surprisingly). I did encounter a number of bugs with the project and file management, but these weren't huge and probably are platform independent. In the end I had an application that did indeed offer me the advantages I listed above. When I attempted to deploy the code, though, I encountered a number of pitfalls and bugs, some of which I think are major. (I am awaiting word from NI on resolutions of some of these issues, so the jury is still out on them. Maybe they have something to do with the particular code I wrote, although I can't see why.) One note: a test I ran with a trivial implementation of the State Pattern showed that writing a (featherweight) object to another--at least in this instance--using an accessor method takes > 140 us (on a cRIO 9074). So... if you use a model where you pluck off an object, have it do something, and then put it back in your model (which is a common thing to do!), and you aren't doing this at a level where the object is private data, you can only do about three of these actions each millisecond. In my case I had a class that no longer had methods but was just present for hierarchical organization, but I had to get its member objects with accessor methods (since I did this inside a method belonging to a higher-level class), and I did this a lot. I found it expedient to convert this class to a cluster and performance dramatically improved (see the sketch below). So... maybe someone else will find it helpful to think carefully about where to use accessor methods for objects within objects. Anyway, if anyone has tried OO on RT and is willing to share what they have learned (maybe some workarounds for issues encountered), that could be helpful.
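For anyone who hasn't hit this, here is the pattern in question as a rough Python sketch (in Python the objects are references, but in LabVIEW each accessor call copies the object, which is where the ~140 us went; the class names are illustrative only):

```python
# The "pluck off an object, act on it, put it back" pattern described above.
class Axis:
    def __init__(self):
        self.position = 0.0

class Subsystem:
    """Purely organizational class: members reachable only via accessors."""
    def __init__(self):
        self._axis = Axis()
    def get_axis(self):            # in LabVIEW: one object copy per call
        return self._axis
    def set_axis(self, axis):      # in LabVIEW: another copy to put it back
        self._axis = axis

def move(subsystem, delta):
    axis = subsystem.get_axis()    # pluck off
    axis.position += delta         # operate
    subsystem.set_axis(axis)       # put back

sub = Subsystem()
move(sub, 1.5)
print(sub.get_axis().position)
```

Converting the organizational class to a cluster is roughly analogous to touching the member directly instead of going through get/set on every access.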
  13. I have the vague notion that I've seen a solution for this somewhere, but I can't find it. In any case, I think it deserves its own topic. Sometimes LabVIEW provides a Data Type code and the data Value as a variant. (In particular, this information appears in the Event Data Node for shared variable events.) With this information, one can in principle cast the data back to its original form, but the Variant To Data function does not support the Data Type code, nor does any LabVIEW function (as far as I know). This very definitely should be a native LabVIEW function (at least supporting the native types). (Otherwise, what good is the Data Type code?) Maybe LabVIEW already has this, but I haven't been able to (re)find it.
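For what it's worth, here is a sketch (Python stand-in) of what such a "cast by type code" function might look like; the numeric codes below are made-up placeholders, not LabVIEW's actual Data Type codes:

```python
# Hypothetical helper: route a flattened payload to the right decoder
# based on a numeric type code. Codes here are invented for illustration.
import struct

DECODERS = {
    0x03: lambda b: struct.unpack("<i", b)[0],  # e.g. an I32
    0x0A: lambda b: struct.unpack("<d", b)[0],  # e.g. a DBL
    0x30: lambda b: b.decode("utf-8"),          # e.g. a string
}

def variant_to_data(type_code, payload):
    try:
        return DECODERS[type_code](payload)
    except KeyError:
        raise TypeError(f"unsupported type code {type_code:#x}")

print(variant_to_data(0x03, struct.pack("<i", 42)))  # -> 42
```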
  14. The methodology we use is consistent with your needs, but it does require a license for the DSC Module. We flatten an object to a string and write the string to a networked shared variable. (We are only logging signals anyway.) The networked shared variable has logging enabled (only doable with the DSC Module) so that LabVIEW logs the string to the Citadel historical database. When we retrieve the data we have to cast the string to the correct class (easy enough). Advantages: NI maintains the shared variables--you don't have to develop a methodology yourself, and they are self-contained so you can separate them from your application. Disadvantages: The DSC Module isn't free. Shared variables, while very good, aren't perfect.
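As a rough sketch of the round trip (Python stand-in, with pickle playing the role of Flatten To String, and a plain dict standing in for the networked shared variable and Citadel layers):

```python
import pickle

class Signal:
    def __init__(self, name, value):
        self.name, self.value = name, value

sv_store = {}  # stand-in for the networked SV (DSC would log writes to Citadel)

def write_sv(sv_name, obj):
    sv_store[sv_name] = pickle.dumps(obj)      # flatten object to a string

def read_sv(sv_name, expected_cls):
    obj = pickle.loads(sv_store[sv_name])      # unflatten on retrieval
    if not isinstance(obj, expected_cls):      # cast to the correct class
        raise TypeError("logged string was not the expected class")
    return obj

write_sv("force", Signal("force", 3.2))
print(read_sv("force", Signal).value)          # -> 3.2
```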
  15. AQ, Yes, this is true, as noted above. I will tackle this issue from a user perspective, though. As a user I want: support for all 13 UML diagram types, a tool compliant with the latest UML standard, a tool with integrated support for version control and many other aspects of software development (ultimately including requirements traceability and testing)--and code generation and code extraction with LabVIEW. I have heard very good things about Endevo's tool (which, again, I have not used myself), and I support furthering the development with Endevo to make this a full-featured tool, but as I look at the website as a potential customer, I see that Endevo's UML Modeller supports only 5 diagram types and doesn't promise compatibility with the latest UML standard, nor does it have anywhere near the feature list that a tool like EA offers (and that I like to use!). (Moreover, EA is much less expensive and can afford to be, since Sparx has sold over 150,000 licenses for it.) My point is not that EA is the answer but that I want a better answer than the current solution. Maybe this means working with Sparx. Maybe this means investing more in Endevo's offering. Are we moving in either direction? I'm sure I'm a demanding customer in this regard, but why shouldn't I have it? :-) Paul
  16. We use the Corporate edition of EA (might as well, since there isn't much difference in price). Yes, EA does do code generation (unfortunately, not for LabVIEW), which is a feature of certain UML tools. We actually do use it to generate Java classes for one of our applications. (You can also generate a model from code, which is something we have not tried.) EA does support much more functionality, as you note.
I have the SysML add-in. I don't use it regularly, though, since we haven't committed to SysML. (Philosophically, I think SysML has a lot to offer since everything is in a single model, but I don't expect our project to adopt it wholesale anytime soon.) I also have the DOORS add-in (links to a DOORS requirements management database), which works well enough for our purposes. I think EA has some work to do to make requirements management seriously effective (which is also why I don't use SysML just yet). In particular, we want to show that if we satisfy all the requirements under a summary requirement, that summary requirement should show as satisfied as well (see NI Requirements Gateway for an excellent example), but EA doesn't have any way to do this now. (I put in a feature request.) EA does support a relationship matrix to show which requirements are covered, but I don't think this as implemented is all that helpful. We do export our requirements to a web-based tool, Enterprise Tester, in which we can write and execute test procedures and generate issues in our issue tracker (JIRA--an absolutely fantastic tool). Enterprise Tester is still in its infancy and rough around the edges, but even at that it has been a huge help.
EA also supports project estimation and management tools. I think these are actually pretty good, but I don't use them, partly because of the issues with handling requirements in the first place and partly because I have more interest in using tools that focus on iterative development. (The plan is to implement the JIRA plug-in, GreenHopper, but I have to learn how to use that first!) Users on the EA forum have said some good things about RaQuest (a third-party plug-in for requirements management). I certainly would like a great requirements management tool that works across our enterprise, but I don't think I've found it yet. Users on the EA forum have had good things to say about the ICONIX toolset as well. I have the book, which is pretty well written, but to date our usage of EA differs. I don't think we will ever adopt the ICONIX methodology as a whole, but the book is still worth a read.
The core of EA has a lot of features, and overall they function quite well. The plug-ins are inexpensive as well. (For comparison, we were interested in an XMI importer-exporter add-in for our previous tool, and the cost of the add-in exceeded the original cost of that already-expensive tool! This functionality is included in the version of EA we already have.) Anyway, most of our work stays solidly within the realm of use case diagrams, deployment diagrams, component diagrams, sequence diagrams, state machine diagrams, and class diagrams, with the last two being arguably the most used here. The EA core supports all of these quite nicely. Support for version control is quite nice and a huge benefit, and it is quite nice to be able to distribute the projects either natively (readable with EA Lite), as HTML, or as an RTF document (the last of which we don't do often).
EA is fast and (relatively) easy to use, I have found, although of course there was a learning curve, much as I experienced the first time I tried to use a spreadsheet many years ago.
  17. It is straightforward to register for shared variable value change events, but only with the DSC Module, unfortunately. The DSC Module includes several functions ("Enable Value Change Notifications", "Request Value Change Notifications" [accepts an array of shared variables], "Cancel Value Change Notifications", and "Disable Value Change Notifications") for this purpose. Connect the resulting event wire to the Dynamic Event Terminal on an event structure. Then the event structure has a "shared variable value change notification" user event. (The Value and Shared Variable are some of the data elements associated with the event.) This page illustrates the basic idea: Creating a Value Change Event for Shared Variables. I was thinking that the command class itself (at the parent level--if the Receivers are all of the same type--or the child level) can include any data we want, so this could be a Receiver object. (This means that the command sender knows the intended receiver in this case, which is often undesirable.) I haven't included a Receiver object myself (my applications haven't needed it), but I have included references in the specific commands, where each reference is a unique type, and really this information could be anything. I don't think this is possible without the extension possibilities allowed by OOP! ___ Update: Since I wrote this I found that upcasting the control reference still worked, so I can write the references in multiple instances (objects) of the same class, which saves a lot of development time and makes the application very easy to reuse. Reflecting on this thread is what inspired me to take another look at this, so thanks!
  18. I think we read a lot of the same posts! Q: When you talk about adding a Boolean, are you talking about something on the front panel of the client (here I mean message recipient), or something that is part of the original message? You can add the Boolean to the message easily enough, but I don't think that is what you mean. I generally try to keep my controllers from needing inputs not in the message, but where that would be necessary it might be tricky, although possible. The method the recipient performs can vary, though, because you can include the recipient as an object within the message object. Anyway, it sounds like you have found a path to success.
  19. OK, first I have to confess that while I read through all the posts in this thread I didn't read each one in detail. The solution we use here sounds like it meets the needs mentioned in this last post. Caveat: We don't key off event types. We use a single event (for any given message thread) and pass all data (of all types) over this single event. In our case this event is a shared variable event, but other things are possible. All the actual messages inherit from a top-level Command object. When we receive a message (flattened to a string) we unflatten to a Command (parent) object, but then dynamically dispatch on the actual type (object type) on the wire. The actual model code does not need to be in the message object, and it can obtain whatever data is necessary. This is just an implementation of the Command Pattern. This does mean that all listeners to a particular shared variable get all messages on that thread but can easily ignore the ones that don't apply. In our case we do have several lines of communication so a subscriber can subscribe to the topics of interest to that subscriber. (So a thread might contain all messages related to a particular axis or subsystem.) Back to the caveat. We can identify the shared variable on the event, which gives us an option of filtering that way as well. Can you list the key requirements and constraints for this part of your application? I'm not sure whether this solution meets them all or not.
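A minimal sketch of the "unflatten as the parent, dispatch on the actual type" step (Python stand-in; pickle preserves the concrete class here, much as LabVIEW unflattens to the runtime type on the wire; all names are illustrative):

```python
import pickle

class Command:
    """Parent type: the default is to do nothing, so listeners can simply
    execute every message and ignore the ones that don't apply to them."""
    def execute(self, model):
        pass

class MoveCommand(Command):
    def __init__(self, position):
        self.position = position
    def execute(self, model):
        model["position"] = self.position

class HaltCommand(Command):
    def execute(self, model):
        model["halted"] = True

model = {"position": 0.0, "halted": False}

# One message thread: every listener sees every flattened command.
wire = [pickle.dumps(MoveCommand(1.5)), pickle.dumps(HaltCommand())]

for raw in wire:
    cmd: Command = pickle.loads(raw)  # unflatten as the parent type
    cmd.execute(model)                # dynamic dispatch runs the child's code

print(model)  # {'position': 1.5, 'halted': True}
```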
  20. A couple years ago when we were investigating tools we tried reading XMI files generated by one tool with another tool. It didn't work very well at all. At the time there was even a note on OMG's site (I'm pretty sure it was OMG's site--I can't find it now) that noted that XMI interoperability hadn't been as successful as hoped at that point. I do know that there are some tools that will read files from certain other tools, but I'd want to test this before counting on it to work! Note that there are different versions of the XMI specification, so if sharing models between tools is essential, you will want to make sure your tools support the version of XMI you need as well as the appropriate version of the UML specification. (The move to UML 2.0 was pretty significant, and not all tools discussed in this thread support the latest version of the UML specification.) The selection of an appropriate tool ultimately will depend on what you want to do with it. If you want to make throw-away sketches you need a certain level of functionality. Generating code is at the other end of the spectrum. In between, one can make models for documentation and design discussions, perhaps requiring management of the models in version control. In all of these, the ease of use and flexibility of the tool comes into play, of course.
  21. What is possible (and what I would do) is package the messages as objects. Each message object includes an object that corresponds to the relevant board. Then in a single controller loop (since the controller has common behavior for all boards--depending only on the current state of the board concerned) the controller uses dynamic dispatching for the particular message (command) for the particular board, based on its state. This is an implementation of the Command Pattern and State Pattern together. It isn't obvious how to do this the first time (my experience) but it is really simple, elegant, and powerful once understood.
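Building on the earlier Command Pattern sketch, here is a rough Python illustration of the State Pattern half: the controller loop delegates each command to the board's current state object, so one loop serves every board (all names hypothetical):

```python
class State:
    def handle(self, board, command):
        raise NotImplementedError

class Idle(State):
    def handle(self, board, command):
        if command == "start":
            board.state = Running()   # state transition

class Running(State):
    def handle(self, board, command):
        if command == "stop":
            board.state = Idle()

class Board:
    def __init__(self, name):
        self.name, self.state = name, Idle()

def controller_loop(messages):
    # A single controller loop for all boards: the behavior depends only
    # on the current state of the board the message concerns.
    for board, command in messages:
        board.state.handle(board, command)

a, b = Board("A"), Board("B")
controller_loop([(a, "start"), (b, "start"), (a, "stop")])
print(type(a.state).__name__, type(b.state).__name__)  # Idle Running
```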
  22. What we do is write the error code (I32) to a shared variable (I32). Hence the communication is fixed size so we can use RT-FIFO enabled SVs. We keep the error text in a LabVIEW error code database so we can get the error explanation on the PC. (This does mean that we don't get the name of the VI where the error occurred.) We configure the shared variable to log (the error code) to the Citadel database. We only use this for serious errors that we don't anticipate. For warnings or errors we can anticipate (really anything but coding or hardware bugs) we create Booleans (for example, measuredForceOutOfRangeIsTrue), and again write these to Boolean shared variables. Then we can subscribe to this information on a UI front panel (as we can the errorCode), and again we log to the Citadel database. (For appropriate types of warnings and errors we can use the SV alarming capabilities.) What this means in practice is that we almost never see an error code. This works for us, but it may not be the solution you were seeking....
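A toy sketch of the code-to-text split (Python stand-in; the codes and messages are invented for illustration):

```python
# On the RT target only a fixed-size I32 crosses the boundary (which is
# what makes an RT-FIFO enabled SV possible); the text lookup lives on the PC.
ERROR_DB = {  # stand-in for the LabVIEW error code text database
    5001: "Measured force out of range",
    5002: "Actuator failed to respond",
}

def explain(error_code: int) -> str:
    return ERROR_DB.get(error_code, f"Unknown error {error_code}")

print(explain(5001))  # -> Measured force out of range
```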
  23. It seems to me you are trying to implement something with the same goals as the Observer Pattern (a publish-subscribe communication paradigm). NI provides an example of one way to implement a publish-subscribe system here: STM (http://zone.ni.com/devzone/cda/epd/p/id/2739). Shared variables with the DSC module provide this built-in, and with the DSC module you can register for shared variable value change events. (There are lots of other features, too.) If this is an option, it is probably the simplest path to success. We have found it works well. The downside is that you don't have access to the internal workings. The upside is that NI maintains it! Paul
If you want to implement your own version of the Observer Pattern, I recommend checking out GoF Design Patterns (http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612/ref=sr_1_1?ie=UTF8&s=books&qid=1261417852&sr=8-1) (fantastic book!) or Head First Design Patterns (http://www.amazon.com/First-Design-Patterns-Elisabeth-Freeman/dp/0596007124/ref=sr_1_2?ie=UTF8&s=books&qid=1261417852&sr=8-2). (Somebody has already figured out how to do it--and do it well!)
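For reference, the heart of the Observer Pattern fits in a few lines; here is a minimal Python sketch (the GoF version adds attach/detach and richer subject state):

```python
class Subject:
    """Publisher: notifies every subscribed observer on each update."""
    def __init__(self):
        self._observers = []
    def subscribe(self, callback):
        self._observers.append(callback)
    def publish(self, value):
        for notify in self._observers:
            notify(value)

temperature = Subject()
temperature.subscribe(lambda v: print(f"logger saw {v}"))
temperature.subscribe(lambda v: print(f"display saw {v}"))
temperature.publish(21.5)
```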
  24. I thought I would add a few details to explain our selection.... When I first investigated UML tools about three years ago I looked at a number of tools (certainly not all), ranging from free to quite expensive. Most of the free tools at the time did not support the (then-current) 2.0 specification. I initially selected a tool from the same vendor that made the requirements management tool we use. I found that UML tool to be very hard to use (admittedly, part of it was probably my lack of experience with UML tools). The help files didn't even attempt to provide information on common user questions, the product support was poor, and extra modules for the tool were absurdly expensive.
We knew about Enterprise Architect and switched to it when we heard good things from some colleagues at another organization. EA quickly proved its value. Not only was it about a tenth the price of the previous tool we had, it turned out to be really usable! It has a generally well-designed interface and I find that I can use it quite efficiently. It handles the core UML functions quite well. We distribute our models to readers via the free viewer, as HTML, or sometimes as RTFs. EA has version control capabilities that work well with Subversion, and EA's design supports sharing packages between models very effectively. The help files are good, there is an active EA user forum (although it's not independent like LAVA), and the Sparx Systems folks are quite helpful on the forum and via technical support. Updates are easy to install. I can see why EA has won some significant industry awards and has a pretty large following.
EA has many additional features that can help with the software development process. Some of these non-core functionalities are more useful than others. One nice thing is that add-ins (some by third-party developers) are inexpensive. (SysML, in particular, has so much potential! I think the requirements traceability features need improvement before this can fully take over for a good requirements database tool, though.) In short, EA has become one of my favorite tools. I like it nearly as much as LabVIEW. (Gasp!)
By the way, you can find lists of UML tools at the OMG's UML page: http://www.uml.org/#UML2.0. I'm pretty sure Endevo's UML Modeller (version 1.2)--already mentioned--is the only tool that supports LabVIEW code generation from diagrams (class diagrams) and diagram generation (class diagrams, again) from LabVIEW code. (EA supports this for other platforms but not LabVIEW. I really want Sparx Systems and National Instruments to work together to make this a reality for LabVIEW.) UML Modeller supports five of the thirteen UML 2.x diagrams (probably the ones I most use). I haven't worked with UML Modeller so I can't comment on its usability.
One last comment: in principle a model developed using one tool can be opened in any other tool that meets the same UML standard, since the models are stored in XML-formatted files according to a standard. Unfortunately, the reality is that models are generally not interchangeable between tools (yet).
  25. We use Enterprise Architect. It isn't free, but it is very inexpensive and provides a lot of bang for the buck. It integrates well with a lot of our other tools. Unfortunately it doesn't have an add-in to generate (or reverse engineer) LabVIEW code.