Everything posted by PaulL

  1. A client (the Context) calls one or more methods on an interface State class -- this is an implementation of a Signal trigger. We can also have a trigger based on an internal evaluation of some variable (a Change trigger). Each trigger may result in a response: this could be a state transition, which itself may have one or more effects, or it may trigger an update on an ongoing activity. The implementations inside the various state classes in the hierarchy determine what happens for that state. Yes, they can and frequently do call their parent methods so that common behaviors do not need to be implemented multiple times. (Note that the actual work is delegated to the Model class.) The UML defines entry, exit, and do methods on a state in a state machine diagram. Entry methods are executed upon entering a state, exit methods upon exiting a state, and do methods while remaining in the state. (A 'do' method is often called in a loop and executes in a short time. In fact, the system completes every action in a very short time, since the responsiveness of the system is defined by this time. Hence we break up long-duration activities into many short actions. The relevant topic here is "run-to-completion.") The UML also defines self and internal transitions on states. On a self transition, the system leaves the state (executing its exit methods) and reenters the same state (executing its entry methods). On an internal transition, the system does not leave the state, so it does not execute the exit and entry methods (just an appropriate do method). Where reflection comes into play: In the "Challenging" slide there is a transition (marked D to E, but really the trigger would have some other name). If we use the entry/exit method approach, this would execute D.exit(), then E.entry(). For the transition D to F, this would execute D.exit(), then B.exit(), then C.entry(), then F.entry(). 
In other words, the entry and exit methods execute until they reach the least common ancestor (LCA). In particular, in neither illustrated transition do the A.exit() and A.entry() methods execute. The LCA cannot be determined at run-time without reflection (or without more cumbersome, but effective, approaches such as trial-and-error tests to map the hierarchy). (In the absence of reflection we invoke the behaviors in the transition code -- do this and that and the other thing, then change state. This works, but it does require repeating some code here and there, which has implications for robustness and maintainability. This isn't a major problem, but reflection would definitely make this better.)
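The entry/exit sequencing described above can be sketched in a text language. This is a hypothetical illustration (the state names D, E, F, B, C, A come from the slides; everything else is invented), using Python's class hierarchy as a stand-in for the reflection a LabVIEW implementation would need to discover the state hierarchy:

```python
class State:
    """Base state: entry/exit default to no-ops so substates override only what they need."""
    def entry(self): pass
    def exit(self): pass

# The hierarchy from the "Challenging" slide: D and E under B, F under C, B and C under A.
class A(State): pass
class B(A): pass
class C(A): pass
class D(B): pass
class E(B): pass
class F(C): pass

def ancestors(state_cls):
    """All state ancestors of state_cls, nearest first (via reflection on the class hierarchy)."""
    return [c for c in state_cls.__mro__ if issubclass(c, State) and c is not State]

def transition_chains(source, target):
    """Exit chain (inside-out) and entry chain (outside-in), stopping at the LCA."""
    src, tgt = ancestors(source), ancestors(target)
    common = next(c for c in src if c in tgt)           # least common ancestor
    exits = src[:src.index(common)]                     # source up to (not including) LCA
    entries = list(reversed(tgt[:tgt.index(common)]))   # below LCA down to target
    return exits, entries
```

For the D-to-F transition this yields exits [D, B] and entries [C, F], matching the sequence described above, and A's methods never execute because A is the LCA.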
  2. This is the State Pattern, not my idea. (There is a link to my full presentation in this thread https://lavag.org/topic/17937-state-machine/#comment-107773.) Your inclination is correct, in that the States are usually flyweight objects. (This is the case in our implementation.) The hierarchical relationship allows for override and extension, so that each state has only the code it needs. This facilitates a clear and robust implementation of a true state machine, without repeating behaviors. For instance EnabledState in our implementation processes an error trigger for it and all its substates. In general, debugging is easy because behaviors are precisely specified and occur in only the appropriate place in the code. The State Pattern is a key part of our solutions.
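The override-and-extension idea can be sketched as follows. This is a hypothetical Python rendering (only EnabledState is named in the post; the other classes, methods, and the Model are invented for illustration) of a parent state handling a trigger once for all its substates:

```python
class State:
    def on_error(self, model): pass           # default: ignore the trigger
    def on_start(self, model): pass

class EnabledState(State):
    def on_error(self, model):
        """Common error response, inherited by every substate."""
        model.log("fault -> FaultState")
        model.change_state(FaultState())

class IdleState(EnabledState):
    def on_start(self, model):
        model.log("starting")                 # substate-specific behavior
        model.change_state(RunningState())

class RunningState(EnabledState):
    pass                                      # inherits on_error from EnabledState

class FaultState(State):
    pass

class Model:
    """The Context: delegates triggers to the current (flyweight) state object."""
    def __init__(self):
        self.state, self.messages = IdleState(), []
    def log(self, msg): self.messages.append(msg)
    def change_state(self, new_state): self.state = new_state
```

The error behavior lives in exactly one place (EnabledState), so debugging stays easy and no behavior is repeated across substates.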
  3. The attached file extracts the relevant slides from a presentation I made at NIWeek 2012 that presents just such a case. (I left the slide on Transition Execution Sequence at the beginning of this selection because of the reference.) The hierarchy can be determined at compile time and stored (not so robust), or at run-time using a couple of somewhat complex approaches (adding code to the classes to derive the hierarchy, or attempting a cast and seeing if it fails). Reflection would provide a solution, I think. (The hierarchy could still be determined once at the beginning of program execution.) TS8237_Lotz_ReflectionExtraction.pptx
  4. There is the following statement in the KB (Blurry Icon Editor Linux Machine): "This will change the fonts for all of the text within your LabVIEW VIs and you will only be able to use the default font with this configuration token in place." If UseXftFonts=False in the labview.config file, we cannot even change the size of the font. The font size is too large to use in the icon editor. If I set UseXftFonts=True in the labview.config file, I can once again change the size of the (blurry) font. At the moment, once I exit the icon editor, the text on the final icon is not as blurry as it is in the editor. So I guess I will have to stick with this in the absence of a better solution.
  5. That is super helpful! I'm not sure how my search didn't find that. I tried it and it works with the LabVIEW Application font. I have to figure out how to get back to Small Fonts since once I selected something else it is no longer in the list. Many thanks!
  6. In addition to the Small Fonts, we have tried the LabVIEW Application font and the other similar fonts (System font, etc.), and many of the other fonts that appear in the list. None of these worked. What do other people working on Linux use?
  7. Out of the box, text in the icon editor looks awful. (See attached image, which is better looking than most.) (Yes, even with small fonts: https://forums.ni.com/t5/Linux-Users/Labview-Icons-under-GNOME/gpm-p/3379530.) Details: LabVIEW 2016 64-bit, CentOS 7 Linux. We have tried many things to get this to work, to no avail. Does anyone have a solution?
  8. When we first deployed object-oriented applications (using by-value objects) on RT targets (quite a few years ago now) we encountered long build times (10 or 15 minutes) that were not repeatable (repeated deployment using the same build specification was successful only a small fraction of the time). This situation was unworkable. We learned that the problems we encountered were due to the tangle of relationships between elements (Rolf's "similar stuff" above). Consequently, we implemented interfaces in the manner I have described elsewhere to reduce interdependencies between elements. Builds since then have been reliable and quite quick. We use objects for all our RT applications. (Caveat: There is one specific issue we encountered and we strategically avoid that.)
  9. I contacted NI Support about this. NI Support created a related post on the Idea Exchange: https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Unit-Test-Framework-Support-on-LabVIEW-for-Linux/idc-p/3801989#M38826. If this interests you, please consider adding a comment or a vote.
  10. Nothing. So do we conclude no LabVIEW for Linux customers are doing unit testing, then?
  11. So, what choices, realistically, do we have for writing unit tests for LabVIEW for Linux? https://forums.ni.com/t5/Unit-Testing-Group/Unit-Testing-tools-in-Linux/gpm-p/3396916 (In 2014 NI was going to evaluate the level of effort to make the UTF available on Linux.) https://github.com/JKISoftware/JKI-VI-Tester/wiki (May be available in the next release). https://github.com/JKISoftware/Caraya (I guess this might be available on Linux?) Are there any other known options? In our case we need to support integration with Jenkins. I don't think that will be the problem.
  12. My recent experience has been managing software engineering teams building complex systems of components (especially large telescopes). In addition to LabVIEW, I have tremendous expertise in modeling (SysML and UML), design patterns, systems engineering, and project management. You may have read some of my papers on LAVA! I can help your organization build complex, robust systems quickly! res Jul 2017 without address.docx
  13. Join us in constructing the Large Synoptic Survey Telescope (http://lsst.org/). We are looking for a talented LSST Software Engineer: https://rn11.ultipro.com/spa1004B/JobBoard/JobDetails.aspx?__ID=*AB6A19BE44FB88D0. Requisition Number 15-0182 Post Date 9/21/2015 Title Software Engineer City Tucson State AZ Description The Association of Universities for Research in Astronomy, Inc. (AURA) operates several observatory centers (including the National Optical Astronomy Observatory, the National Solar Observatory, Large Synoptic Survey Telescope (LSST) and the Gemini Observatory) in the United States and Chile under cooperative agreements with the National Science Foundation. The LSST Project has begun construction of a large ground based observatory in Chile. The 8.4-meter LSST will survey the entire visible sky deeply in multiple colors every week with its three-billion pixel digital camera, probing the mysteries of Dark Matter and Dark Energy, and opening a movie-like window on objects that change or move rapidly: exploding supernovae, potentially hazardous near-Earth asteroids, and distant Kuiper Belt Objects. The LSST Telescope and Site Group (T&S) is looking for a Software Engineer to complete the design, implementation, and verification of software components necessary for the LSST survey mission. This position will engage in all phases of software development including: requirements elicitation, detailed design, implementation, and verification. The Software Engineer will be responsible for delivery of high quality end products including: requirements, designs, deployed control and user applications, and user manuals in a timely fashion. Typical components to be delivered include those to handle dome enclosure control, active optics, and instrument control. This position may be responsible for managing out-sourced contracts. This position is located in Tucson, Arizona at LSST Project Office and relocation to Tucson is expected. 
Essential Functions: This position will work closely with the T&S Lead Software Manager and with other members of the software team. For each software component, in coordination with customers and the software team, the Software Engineer:
  • Elicits software requirements, showing traceability to higher-level requirements.
  • Participates in selection of input and output electronic devices, where appropriate.
  • Prepares component structural model.
  • Creates behavioral model showing system triggers and states.
  • Develops detailed design class model.
  • Implements design in source code.
  • Builds and deploys application.
  • At each step, participates in review of element with team.
  • Participates with team in reviews of other components.
  • Provides component development plan inputs and reports progress against plan.
  • Communicates constructively with customers and fellow team members to ensure successful realization of project needs.

Requirements:
  • Bachelor's degree in computer science, mathematics, engineering, or physical science. Master's degree preferred.
  • Effective verbal and written communication skills.
  • Analytical and problem-solving ability.
  • Attention to detail and commitment to achieving high-quality results on time.
  • At least two years of software experience.
  • Experience with hardware control applications and real-time operating systems.
  • Demonstrated working experience in National Instruments LabVIEW (including Real-Time and FPGA applications). Knowledge of LabVIEW Object-Oriented Programming is a plus.
  • Demonstrated working experience in MATLAB, Excel Visual Basic, and C.
  • Experience working with version control systems.
  • Experience with unit testing.
  • Ability to function in an unstructured and dynamic work environment.

Desired Experience/Skills/Abilities:
  • Experience in the Unified Modeling Language (UML) and Systems Modeling Language (SysML). Especially advantageous is expertise with Sparx Systems Enterprise Architect.
  • Skill in object-oriented analysis and design.
  • Knowledge of object-oriented design patterns is a plus.
  • Experience working with issue tracking and management software, especially Atlassian JIRA.
  • Working experience in wiki publishing, especially Atlassian Confluence.
  • Ability to work in iterative development cycles.
  • Experience working with publish-subscribe protocols, especially Data Distribution Service (DDS).
  • Knowledge of C and other comparable languages used for real-time work.
  • Expertise in Java or Python.
  • Experience in contract management.
  • Experience working on telescopes or similarly complex systems.
  • Ability to learn and apply new skills.

Applications will be accepted until the position is filled. All complete applications received by November 1, 2015 will be given full consideration. Please list 3 professional references in your application. Please attach a statement of professional interests or cover letter and CV or resume (PDF files preferred) to your application. Please name any attachments with the following format: 15-0182LastnameDocname. Application documents that are not uploaded as part of the application may be sent to employment@aura-astronomy.org. As an Equal Opportunity and Affirmative Action Employer, AURA does not discriminate because of race, sex, color, age, religion, national origin, sexual orientation, gender identity, lawful political affiliations, veteran status, disability, and/or any other legally protected status under applicable federal, state, and local equal opportunity laws. Preference granted to qualified Native Americans living on or near the Tohono O'odham reservation. We are an Equal Opportunity Employer. Please view Equal Employment Opportunity Posters provided by OFCCP here.
  14. You are correct. The particular implementation of the Factory Method Pattern I show here does include references for all available objects. This is because: 1) Most of the factories my teams build are for applications that actually use all the object types available. (This is certainly true for the State Pattern and Command Pattern, not necessarily true for the Strategy Pattern.) (Of course, I only include the objects that are relevant for the specific application.) 2) We want to maximize performance, so we want the state objects to be in memory. (Again, this is typically more important for the State Pattern and Command Pattern than the Strategy Pattern). 3) The objects are flyweight or nearly so. Any external references are simple (one layer deep, to interfaces if applicable). 4) We want to make implementation simple, foolproof, and readable. Reasons to choose an implementation to support some sort of dynamic loading, as the VIShots example does, would be to: 1) Support plug-ins (which may be compelling, but is nontrivial in practice). 2) Avoid loading unneeded objects, as you suggest. It would be simple enough to create a concrete implementation of the CookBehaviorFactory:createCookBehavior() method to use the Get LV Class Default Value.vi instead. Paul
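The trade-off described above between a preloaded factory and a dynamically loading one can be sketched in Python. This is a hypothetical illustration: the CookBehavior name echoes the VIShots example mentioned in the post, but the class and method names here are invented, and dynamic construction stands in for what Get LV Class Default Value.vi would do in LabVIEW:

```python
class CookBehavior:
    def cook(self): return "generic"

class Bake(CookBehavior):
    def cook(self): return "bake"

class Grill(CookBehavior):
    def cook(self): return "grill"

class PreloadedFactory:
    """All behaviors in memory up front: fast, simple, and foolproof."""
    _instances = {"bake": Bake(), "grill": Grill()}
    def create_cook_behavior(self, name):
        return self._instances[name]          # shared flyweight instance

class DynamicFactory:
    """Behaviors created on demand: avoids loading unneeded objects."""
    _registry = {"bake": Bake, "grill": Grill}
    def __init__(self):
        self._cache = {}
    def create_cook_behavior(self, name):
        if name not in self._cache:
            self._cache[name] = self._registry[name]()   # construct on first use
        return self._cache[name]
```

The preloaded version pays its full memory cost at startup in exchange for deterministic performance; the dynamic version defers that cost, which matters most when many registered types go unused.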
  15. Fair enough. My perspective is that we need to equip a broader range of developers with an understanding of interfaces, etc., so that they can use project libraries effectively. Look, I think the loading-of-dependencies approach is consistent (the loading of parent libraries, maybe less so) with LabVIEW's overall linking model, and I think there would be trade-offs if LabVIEW were to depart from that model. I think it is straightforward to work with that model, in such a way that project libraries are helpful (in certain circumstances), especially when coupled with interfaces, so I don't think it is helpful just to bash the project library concept. It is a useful concept! Perhaps proposing an alternative concept that would address competing needs would be more likely to yield results. Keep in mind, though, the LabVIEW IDE's overall approach to loading dependencies. Changing that may be more convenient for some use cases, but it would have to be in a manner such that it would be obvious to a broad range of developers, even me! Are there examples from other development environments that more closely represent what you need?
  16. I understand. Fair enough. I think we all agree that A and B in drjpowell's second case, which correspond to odoylerules' Hardware and GUI classes, shouldn't be in the same project library. I think we also agree that pitfalls like these are not immediately obvious to all those who design LabVIEW applications. My experience, once I understood how library dependencies work, was that it was advantageous to use project libraries when needed, putting A and B in separate project libraries as appropriate. Once I knew the rules this was not difficult, and it even drove me to end up with very clean projects with code in easily reusable, packaged, namespaced libraries; in my experience it was well worth the effort. This is a good capability to have, but it does require careful application.
  17. "I do use labview classes, but i'm very careful to keep them limited and try not to cross reference files unless i know exactly what i'm pulling in." Fantastic! "I had multiple instances of strange cross dependencies loading so much that by the end of the day most of my source code was being loaded to the crio. This was causing major problems since GUI libraries were being loaded as well and were causing failed builds. By the end, i couldn't' figure what was loading what and moved everything out of libraries. This action alone quickly cleared up the issues i was having and was a huge lesson learned." Hmm..., this just means the code in the libraries you managed as assembled into these libraries had unintended dependencies. Sure, pulling items out of libraries can separate things until you get rid of unused dependencies, but then the groupings are obsoleted as well. The better thing is to fix the libraries so they have the proper dependencies. (I would fix--or not use--any library that has improper dependencies, as far as possible.) An easy way to see the dependencies of a library (.lvlib)--or anything else--is to open a new project and drop the library (or other item) alone into the the new project. Then look at the dependencies. My advice is to get used to paying close attention to what is in the dependencies--for everything--not just libraries. This will help you make the code design and architecture better. Again, this isn't a problem with the library concept itself but misapplication of it in specific libraries. It seems clear that this sort of management (what belongs in a project, or in a library, or in a class, and how these link to one another) deserves more attention in training or other materials. Paul
  18. You can find out how to set up Git to work with LabVIEW's diff and merge tools here: https://lavag.org/topic/17934-configuring-git-to-work-with-lvcompare-and-lvmerge/#entry107740.
  19. Well, here are a couple: http://lavag.org/topic/16235-organizing-your-projects-on-disk/ http://lavag.org/topic/15271-lvclasses-in-lvlibs-how-to-organize-things/ There are others but these are the most recent and likely have the better content. I guess I haven't collected my thoughts on this in one easy-to-find place. Sorry!
  20. Your description is quite correct. On the other hand, I don't think this is a problem specific to project libraries. It is a trade-off associated with using templates. I have original elements that I need in a new project, but the elements in the new project must be distinguishable from the original items. I can 1) copy or 2) construct the elements; project libraries then add the capability to keep the element names in a collection or namespace, which I think can be quite valuable--I can even have both libraries in the same project if necessary. The new elements are images of, but not identical with, the originals. I don't know of a straightforward way to get around that. Copying a template is (generally) relatively simple. A second issue is that some customization (changing namespaces and references) is often necessary. Then there is the much bigger issue: what happens when the template changes? How do we change all the copies? This is also not simple if we are not programmatically constructing the copies, but we mitigate the impact by keeping the namespace-specific elements as thin as possible (so that they have only namespace-specific content) and calling common code beneath. In other words, the namespace-specific elements are the tips of the iceberg that is the common code. Then changes to the common code can propagate easily across projects; changes to the thin top layer are still problematic, but these can be very few now. Of course, with some sort of scripting other solutions exist, but then we are well beyond the topic of the basics of project libraries. Would I like a straightforward solution to the challenges of using template code (while keeping the advantages of using templates in the first place)? You bet! It is also worth pointing out that methods of abstraction can help the template stay flexible while still remaining thin at the top layer. Paul
  21. All right, I will have to jump in now. I find that .lvlibs do have their purposes. We use them to create collections of, say, typedefs (for namespaces, in particular) and similarly, of course, shared variables (LabVIEW does this), and we create collections of certain types of classes (a set of Command classes or State classes). This is especially convenient since, for example, we often need to include the typedefs from one component in the project for another component. It is convenient to load the entire collection as a unit, and the namespace avoids collision with the other similar collections in the project. I have written on the uses of .lvlibs elsewhere on LAVA. These are contexts where .lvlibs are helpful, but we also carefully use interfaces to avoid the load-everything issues correctly identified in this thread. I certainly do think that managing dependencies is essential when using .lvlibs, but then it is good practice anyway. We also collect tightly connected groups of classes in .lvlibs, say, ones that perform a certain set of functions. Then when we use these functions, the code appears neatly in a library in the dependencies. Again, it is super-important to manage dependencies, but let's recognize their legitimate value. I do agree it is unfortunate that loading even a child .lvlib library loads all the libraries in the library hierarchy. This is not the behavior our UML tool has, on the other hand, which loads only the requested library. I'd prefer the latter approach for LabVIEW as well, but I can see the problem, since some code will be broken without dependencies. I'm not sure what the best way to handle this is. I do agree that .lvlibps need some more work before they will be really usable, as I have also written about on LAVA.
  22. Well, I think refactoring (in an OOP context, at least) is for the most part a reasonably straightforward application of design principles, such as the Single Responsibility Principle (see http://smile.amazon.com/Head-First-Object-Oriented-Analysis-Design/dp/0596008678/ref=sr_1_1?s=books&ie=UTF8&qid=1405959788&sr=1-1&keywords=head+first+object+oriented+analysis+and+design, for one), to achieve certain goals (maintainability, reusability, etc.). I think using a modeling tool (I'm thinking UML here) greatly expedites the process--and I think the modeling shows that the differences between OO languages, at least at some levels, are not so great. (There are essential differences, of course, but I think many classes--at least the lower-level blocks--will have the same basic structure in any OO language. At the level of more complex constructs such as design patterns, there is perhaps a little less commonality, but the principles are still the same.) I have not, however, read this particular book of Martin Fowler's. Maybe I should read it....
  23. James, You have taken an important step in identifying "a couple of areas at a high risk of change/versioning." Some of the Object-Oriented design principles that are helpful here are: "Encapsulate what varies. Code to an interface rather than an implementation. Each class in your application should have only one reason to change." See http://smile.amazon.com/Head-First-Object-Oriented-Analysis-Design/dp/0596008678/ref=sr_1_1?s=books&ie=UTF8&qid=1405959788&sr=1-1&keywords=head+first+object+oriented+analysis+and+design. These principles are behind many of the Gang of Four design patterns. For a change in communication protocol, consider the Adapter Pattern. This is probably a good choice for a changing file schema as well. If you want to avoid repeating code, you may put the relevant pieces in common code that your solutions call. For certain types of repetition (an algorithm changes, but the versions of the algorithm do not map 1 to 1 to clients, which may be what you are seeing) use the Strategy Pattern. Paul
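The Strategy Pattern recommendation can be sketched briefly. This is a hypothetical Python illustration (all class and method names are invented; a changing file schema serves as the varying algorithm) of encapsulating what varies behind an interface so clients do not map 1 to 1 to algorithm versions:

```python
from abc import ABC, abstractmethod

class SchemaParser(ABC):
    """The interface clients code to; concrete strategies encapsulate what varies."""
    @abstractmethod
    def parse(self, line: str) -> dict: ...

class SchemaV1(SchemaParser):
    def parse(self, line):
        name, value = line.split(",")          # v1: comma-separated fields
        return {"name": name, "value": value}

class SchemaV2(SchemaParser):
    def parse(self, line):
        name, value, units = line.split(";")   # v2: semicolon-separated, adds units
        return {"name": name, "value": value, "units": units}

class FileReader:
    """Client codes to the SchemaParser interface, not to an implementation."""
    def __init__(self, parser: SchemaParser):
        self.parser = parser
    def read(self, lines):
        return [self.parser.parse(line) for line in lines]
```

When the schema changes again, only a new SchemaParser subclass is added; FileReader and every other client have no reason to change.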
  24. A related thread: http://lavag.org/topic/16235-organizing-your-projects-on-disk/
  25. Here is how I have this set up. Select Tools...Options... and click on the Diff tab. In the External Diff/Merge area I have:
  • External Diff Tool: Custom
  • Diff Command: 'C:/Users/Paul/AppData/Local/Programs/Git/bin/_LVCompareWrapper.sh'
  • Arguments: "$REMOTE" "$LOCAL"