
Leaderboard

Popular Content

Showing content with the highest reputation on 01/13/2011 in all areas

  1. Hi, this is a small tool that adds a shell menu entry, 'Open with LabVIEW Compatible Version'. It is very useful when working with multiple LabVIEW versions: it launches the appropriate LabVIEW version if it is installed; otherwise, it prompts the user to select a more recent installed version. Requirements: LabVIEW 8.6.1 Run-Time Engine. Use: right-click on a LabVIEW file and select 'Open with LabVIEW Compatible Version'. Supported file formats: .vi .vit .ctl .ctt .llb .lvlib .lvproj .lvclass .xnode .xctl LabVIEW Shell Launcher 1.0.4.zip
    1 point
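The core decision the tool makes can be sketched as a small selection function. This is an illustrative Python sketch, not the actual tool's code (the real tool is a LabVIEW-built executable, and the function name and tuple representation here are assumptions): given the LabVIEW version a file was saved in and the versions installed on the machine, pick the exact match if present, otherwise the nearest newer installed version, otherwise nothing (at which point the tool would prompt the user).

```python
def pick_compatible_version(saved, installed):
    """Pick which installed LabVIEW version should open a file.

    saved: (major, minor) version the file was saved in, e.g. (8, 6)
    installed: list of (major, minor) versions present on the machine

    Returns the exact match if installed, else the nearest newer
    version, else None (caller would then prompt the user).
    """
    if saved in installed:
        return saved
    # Tuples compare lexicographically, so (8, 6) < (2010, 0) as desired.
    newer = sorted(v for v in installed if v > saved)
    return newer[0] if newer else None
```

For example, a VI saved in 8.5 on a machine with 8.6 and 2010 installed would open in 8.6, the nearest newer version.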
  2. We (NI) revamped the Product Partner and Compatible with LabVIEW programs over the past few years, and I wanted to ask the LAVA community for feedback on the support requirements for Compatible with LabVIEW Silver and Gold products. I wasn't sure where to post this topic, so I'm sorry if the Lounge is not the correct forum! Our requirement for this program is for Silver products to have a documented support plan or policy; we want users to have somewhere to go for support for these products! This is a pretty straightforward requirement: we accept pretty much any support offering (email, forums, telephone, etc.) as long as it's documented in the product. This thread is really intended to gather feedback on the Gold product requirements, where we require a documented support policy with a minimum 2-day response time. We're open to companies or organizations having a pay-for-support policy, but it must meet this requirement in order for a product to reach the Gold level. We see a potential issue for free products like the OpenG libraries, which are maintained by a community where no one individual is responsible for support; how can these free products offer a guaranteed 2-day response time on support inquiries? Why is the 2-day response time relevant? We anticipate these APIs being used by customers in mission-critical applications, and we don't want those customers to be stuck if a bug or some other issue is found. So, what are your thoughts? Thanks! -RDR NI Partner Program Staff Engineer
    1 point
  3. I appreciate how open you are about this! And I can understand the value of keeping your standards high (and pure) for the LVTools Network. Regarding OpenG: as a community member I cannot guarantee that there will always be an appropriate response within 2 days; some of the packages are more obscure than others. However, I think there is a possibility to use SourceForge's issue tracker. We should be able to set that issue tracker up so that when an issue (bug/feature request/support request) is raised, a certain group (yes, those are volunteers) gets a direct notice (email) about it. Beyond that, there is no better promise than any other (commercial) party can make, but I guess that some sort of complaint from a customer could revoke the Gold status of any package (commercial or community supported). One thing that might 'scare' people is the public nature of these discussions. However, I believe in open communication about bugs etc., but that should be stated clearly on the LVTN page. Regarding other community-supported packages that might want to go LVTN: I think it's not easy to keep the community active enough to keep these packages supported. It needs commitment; if I look at the Code Capture Tool, I sometimes don't have the inspiration to respond to a question (and sometimes I just lack time). However, if I look at the community-supported packages that I use (Mantis, Mercurial), I have noticed that the response time is very short. For instance, a question on the Mercurial mailing list results in a very fast (and correct) answer from the main developer. It will be tough for us to get the same standards running. Regards, Ton
    1 point
  4. For those that are interested in the WikiLeaks issue, there is a relatively recent documentary on WikiLeaks. It was produced before the cables, but includes the video 'Collateral Murder' posted above. It also gives some more information about what happened around this video (including WikiLeaks meeting the family of those killed in it). You can find it on YouTube: http://www.youtube.com/watch?v=lPglX8Bl3Dc Make sure you don't just watch the first part; it's split into 5 or 6 parts (depending on the uploader). I am wondering why it has taken so long for this discussion to arise on the forums. I think it's one of the most significant events for global internet culture and the effects of technology on mankind (the stuff we as techies have something to say about). Felix
    1 point
  5. I am happy to announce a new round of videos for the FRC 2011 season. This year, Enable Training and Consulting Inc. has put together something we call The Seven Steps to FRC Robotics Success. The first two videos are now up at FRCMastery.com, with more to follow soon. Use the dropdown menu at the top of the page to go to LabVIEW for FRC; there's a link to the 2011 videos there.
     In Step 1 we will:
     · Start a new FRC LabVIEW robot project
     · Get familiar with the FRC cRIO robot program structure
     · Deploy code to the cRIO
     · Explore the FRC Driver Station
     In Step 2 we will:
     · Add a joystick button to momentarily stop drive motors by modifying the Teleop SubVI
     · Examine the LabVIEW Case Structure
     Plus, all of our material from the 2010 season is still available.
    1 point
  6. I see. I will need to read through the topics, but I can say this right away: we use the State Pattern as described by the Gang of Four. In this pattern we would never create a macro or queue of states, although we can and do create macros of triggers (which could be commands) that we can invoke serially on the state machine. Specifying a series of states would be antithetical to the concept of a state machine, I think. For instance, working off ShaunR's example, "Load Settings," "Clear Display," and "Set Window Position" could all be separate commands (triggers) wrapped into an "Init" macro command, but these are not states. An Invoker calls the corresponding methods on the Context, which in turn delegates the operations to the State (so our abstract State would have the methods loadSettings, clearDisplay, and setWindowPosition, and the implementation behavior for these methods would vary between states--which is the point of having a state machine). One clarification is in order: it is both possible and reasonable for a Context method to invoke several methods on State in sequence (hence decomposing a command into smaller operations). So, we could alternately implement Context:init as State:loadSettings...State:clearDisplay...State:setWindowPosition. (For ease of comprehension, in practice we only do this if we won't change states along the way--and rarely, at that--but there is really no theoretical reason you couldn't change state at each step.) Even in this case, though, we are assembling a series of operations, not states. Again, I don't think specifying a sequence of states is in keeping with the concept of a state machine.
    1 point
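The structure described above can be sketched in a few lines. This is a minimal, illustrative Python rendering of the GoF State pattern as the post describes it, not anyone's actual implementation; the method names (loadSettings, clearDisplay, setWindowPosition) come from the post's example, while the concrete states Idle and Running are hypothetical.

```python
from abc import ABC, abstractmethod

class State(ABC):
    """Abstract State: the behavior of each operation varies per concrete state."""
    @abstractmethod
    def load_settings(self, ctx): ...
    @abstractmethod
    def clear_display(self, ctx): ...
    @abstractmethod
    def set_window_position(self, ctx): ...

class Idle(State):
    def load_settings(self, ctx):
        ctx.log.append("Idle: settings loaded")
    def clear_display(self, ctx):
        ctx.log.append("Idle: display cleared")
    def set_window_position(self, ctx):
        ctx.log.append("Idle: window positioned")
        ctx.state = Running()  # a transition may happen at any step

class Running(State):
    def load_settings(self, ctx):
        ctx.log.append("Running: settings ignored")
    def clear_display(self, ctx):
        ctx.log.append("Running: display cleared")
    def set_window_position(self, ctx):
        ctx.log.append("Running: window move refused")

class Context:
    def __init__(self):
        self.state = Idle()
        self.log = []

    def init(self):
        # 'init' is the macro command (trigger): it decomposes into a
        # sequence of operations delegated to the current State --
        # a series of operations, never a queue of states.
        self.state.load_settings(self)
        self.state.clear_display(self)
        self.state.set_window_position(self)
```

Calling `Context().init()` runs all three operations with Idle behavior, and the last one transitions the machine to Running; the caller specified one trigger, and the states themselves decided the transition.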
  7. Hold down ctrl-shift to bring up the hand tool and use the mouse to move around the block diagram. Much faster than using scroll bars. Why is this bad? Meh... that's not a bad habit; it's a personal preference. Is your hack & check used to figure out what the bug is, or is it a semi-random guess at fixing the bug without fully understanding it? If the former, no big deal. If the latter... yeah, you might want to rethink that. I agree with Felix. This is probably the change that will benefit you the most. When I have functionality that is a reuse candidate, I do several things before adding it to my reuse library:
     1. Create a library for the component that exposes that functionality.
     2. Decouple the library from my app code. (Make sure the library isn't dependent on any app-specific code.)
     3. Copy the component source to a couple more projects and use it there as well.
     Only after I've used the component in a couple of different projects (which almost always results in some changes to better generalize it) will I add it to my reuse library.
     - Writing block diagrams that don't fit on one screen
     - Routing wires behind structures
    1 point
  8. Apparently the trick is to use a tilde (~) after the colon. (So says the wiki.) Ton
    1 point
  9. AQ's general thoughts on project, library and VI relationships: A VI is a function dedicated to a specific task. It has no particular allegiance to any particular application. It doesn't care who calls it... it does what it does when it is invoked. A Library is a collection of related VIs. Some libraries are dedicated to a particular task: .lvclass libraries are dedicated to defining a new data type. .xctl libraries are dedicated to defining a new control. Libraries are coherent distribution units -- all the VIs therein should distribute together. The VIs that test a library's functionality should not be in the library since you don't generally want to distribute your test harness. Again, libraries have no allegiance to any particular application in LV. A project *is* a particular LV application. There is a one-to-one correspondence between "something I as a user generate to be an end product" and a single .lvproj file. There is some leniency to this one-to-one relationship if a single chunk of VIs are used to build multiple build targets -- perhaps both an exe and a dll, or multiple flavors of source distribution, for example -- but these are just flavors of the same general deliverable. The project should include the test harness for the deliverable. A VI and a library are reusable components. They may be used by multiple projects. If you intend to have a component that you share between multiple projects, then there should be a project for that shared component itself. That project should contain the test harness for that component. The other projects that just use the shared component don't include the test harness for the shared component. In the most extreme of stringent coding practices, the project for the shared component has a Source Distribution build. It is used to generate a separate copy of the source code which will be used by all the other projects. The other projects do not ever reference the original source code of the shared component. 
This protects against unintended edits, and it allows you to always regenerate a clean version of the shared component from the shared component's project. The post above said "that lvlib is a namespace and has a one-to-one pairing with VIs". The library actually has a one-to-many relationship with VIs: each library contains multiple VIs. You should package together VIs that are related, to the extent that feels reasonable to you. Some libraries are very small; the Analysis library that ships from NI has 600+ VIs in it. In the words of Yoda, "Size matters not." It is a question of how related the functionality is and whether the VIs are intended to be distributed/used together. You can use sublibraries to give further breakdown if that is useful. Make VIs private or protected whenever possible. It helps your debugging later if you know that no one else could possibly be calling a given VI, and it makes it easier to change conpanes if you know that no one else is using a given conpane. My personal goal is that all VIs should be owned by either an XControl library or an LVClass library; plain libraries should only own other libraries, for packaging and distribution purposes. This is in accord with the maxim "All bits of data may be thought of as part of some object; all functions may be thought of as actions taken by some object." Do I actually keep to this goal? No. LabVOOP still has inefficiencies in some places, and sometimes I get lazy. But I'm working toward that goal and do believe it is viable. All the VIs for a single project go in a single directory unless they are components intended to be shared among many projects. Within that single directory, I recommend creating subdirectories for each library. I do not recommend that you have any further disk hierarchy. If you create folders within your libraries, so be it, but don't make directories on disk that reflect those folders. Why?
Because folders are the organization of your library for presentation as a palette and for providing scope for VIs. It is really annoying to want to change the scope of a VI and have to save it to a new location to keep your disk organization consistent. And it serves no purpose: if the library is a coherent distribution unit, then when you're on disk you're supposed to be moving the entire library as a chunk. Some people complain that this makes it hard to use the "Select a VI..." item in the palette, but I suggest that is what drag-and-drop from the project is for. Use the project window as a pinned palette during development. When you deploy, deploy the library with a .mnu file that is part of the regular palette hierarchy. All of this can feel like overkill for a single developer by him/herself, but I find that these guidelines help avoid cross-linking, debugging, and distribution problems even with my own projects that aren't shared with any other G developers. All of the above are my PERSONAL thoughts and should not be taken as NI speaking. I am NOT a full- or even part-time G developer; I do C++, and G is my hobby, so take my advice with a grain of salt. But, on the other hand, I do spend a lot of time talking and thinking about what the right way to do things in G is, and I know how the internals are designed. At any rate, it's your call how valuable you think all this advice is. PS: If your clusters are already typedefs, put the typedef into the project window, pop up on it, and choose "Convert Contents of Control to Class." This will help you on your way to converting your project over.
    1 point
