Everything posted by John Lokanis
-
Which is the more stable virtualization option for running the Windows version of LabVIEW under OS X: Parallels 6 or VMware Fusion 3? Thanks, -John
-
I have been trying to view the LVOOP Design Pattern document on the NI site for the last few days but cannot get it to load. Does this link work for anyone else? http://decibel.ni.com/content/docs/DOC-2875 I know this is the right link because I have it saved from before, and I have found links to the page from other pages discussing LVOOP. It also seems the search functions of NI.com are down. -John
-
Think of the UI as your boss and yourself as the core. Your boss tells you what to do but can't do anything himself; you have to do all the work. Therefore, you are decoupled from your boss: he can be fired and a new boss brought in, and you will still do the same work. If you are fired, the boss can't get anything done without you! (one of the joys of being a LV dev)
-
How do I set LV exit code or write to stderr?
John Lokanis replied to jmg's topic in LabVIEW General
LabVIEW applications do not run as console apps, but there are some tricks you can play using .NET and Win32 DLL calls. Read through this thread: Running LabVIEW as a CONSOLE app -
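For context, here is a minimal Python sketch (not LabVIEW) of what a calling process expects from a console-style program: a numeric exit code and text on stderr. The child command is hypothetical, just simulating a failing tool, but it shows the contract the tricks above are trying to satisfy.

```python
import subprocess
import sys

def run_and_capture(args):
    """Run a child process and return its exit code and captured stderr."""
    result = subprocess.run(args, capture_output=True, text=True)
    return result.returncode, result.stderr

# Simulate a console program that reports failure via stderr and an exit code.
code, err = run_and_capture(
    [sys.executable, "-c",
     "import sys; sys.stderr.write('fatal: bad input'); sys.exit(3)"]
)
```

A caller (a test runner, a batch script) keys off `code` and `err`; a GUI-only app that cannot set either is invisible to such automation.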
Daklu - fair enough. I can agree with that. I am trying to architect some code where the UI will be as dumb as possible but still as decoupled as possible. In other words, with your example, the UI will send the file path to the core, but the core will do all the error checking on it and send any error messages back to the UI to display. The UI code will only deal with the controls on the FP and other things that need to be displayed. My goal is to eventually move the UI off the computer that the core code is running on, possibly replacing it with a web interface, and possibly moving to a multi-client, single-server paradigm.
-
That is not my point. A network interface is just an example; the interface could just as easily be a queue. The point is it CAN be sent over a network if you so choose. In other words, the UI and the core engine know nothing about each other's implementation. They just pass data over a <insert generic interface here>. If you are passing references from your UI to your low level code, then your low level code must know a lot about your UI, and therefore you are not decoupled. I'm not saying that is a bad design; I do it all the time. I am only saying that it should not be considered a decoupled UI design.
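To make the "<insert generic interface here>" idea concrete, here is a small Python sketch (invented names, not LabVIEW) where the core talks only to a transport's send/receive contract. The in-process queue implementation could be swapped for a TCP-backed one without touching the core at all.

```python
import queue

class QueueTransport:
    """In-process transport; a TCP-based class with the same two
    methods could replace it without the core engine changing."""
    def __init__(self):
        self._q = queue.Queue()

    def send(self, message):
        self._q.put(message)

    def receive(self, timeout=1.0):
        return self._q.get(timeout=timeout)

def core_engine(transport):
    # The core knows only the transport's send/receive contract,
    # nothing about front-panel controls or references.
    transport.send({"topic": "status", "value": "running"})

link = QueueTransport()
core_engine(link)
msg = link.receive()
```

The UI side would call `link.receive()` and decide for itself how to render a "status" message.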
-
Just because you passed all the references from your FP to a subVI does not mean you are decoupled. The sub-VIs still need to know what is on the FP and how to control it. To be decoupled, you would need your FP to have code behind it that receives data from the core engine and then decides how to display it based on your UI's layout. To return to a point I made awhile back: if your UI and core engine cannot run on separate computers on a network, then you are not decoupled.
-
Yes, but only if your app is trying to display data at high speed. If your app is some sort of automation control app, you are likely not doing anything that a 1 ms delay could affect, and in most cases it is less than that. Additionally, there are ways around this: create a separate dedicated channel for the high-speed data and have a separate UI loop handle it.
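One common form of that dedicated high-speed channel is a display consumer that drains its queue and keeps only the newest sample, so the UI loop can never fall behind the producer. A minimal Python sketch of the idea (names are illustrative):

```python
import queue

def latest_value(q):
    """Drain the queue and return only the newest item, discarding
    stale updates. A display loop that does this never accumulates
    a backlog behind a high-rate producer."""
    item = q.get()  # block until at least one item arrives
    while True:
        try:
            item = q.get_nowait()
        except queue.Empty:
            return item

fast = queue.Queue()
for sample in range(1000):  # simulate a burst of high-speed data
    fast.put(sample)
newest = latest_value(fast)
```

Only the last sample (999) survives; the other 999 stale values are dropped rather than drawn one by one.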
-
Cat - Web Services support SSL, if you need it. And there is no reason you need to use them for a decoupled UI; you could just as easily use a queue if your UI and core are both running on the same machine in the same app instance. The benefits of a decoupled UI are: you can replace or change the UI without affecting the core code; you can use the message channel to automate the core (headless); you can use the message channel to test the core and UI separately; and you can log the message channel for field debugging. The downsides are: more code to write; not as easy to implement, design-wise; and possibly slower performance in high-speed apps.
-
Maybe she is just watching us fight it out! (or maybe she has some real work to do...)
-
I guess my original point is not getting across. How about thinking about it this way: if your UI can run on one computer and your core engine can run on another, with nothing between them other than the network, then your UI is decoupled. I am not saying that the UI portion has no code in it. In fact, I expect it to have a lot of code to interpret messages from the core engine and then implement the data changes as UI control changes, utilizing whatever methods are required. I also expect the UI code to convert user events into messages that tell the core engine what to do. That way, the only connection between the UI and the core is a bidirectional messaging system. Those messages can then travel over whatever link you desire, and you can monitor and log them for debugging. You can also inject messages from non-UI actors to automate your core engine, and you can do the same to create test simulations for your UI. That, IMHO, is what a decoupled UI is. You are free to disagree, but I hope you at least understand what I am trying to say.
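The bidirectional-messaging arrangement described above can be sketched in Python (the message shapes and function names are made up for illustration): two channels, one log that sees everything, and a test harness that injects the same message a button press would produce.

```python
import queue

ui_to_core = queue.Queue()
core_to_ui = queue.Queue()
message_log = []  # every message crosses one observable, loggable channel

def post(channel, message):
    """Send a message and record it for field debugging."""
    message_log.append(message)
    channel.put(message)

def core_step():
    """Core engine: react to one command, report state back as a message."""
    cmd = ui_to_core.get()
    if cmd["action"] == "start":
        post(core_to_ui, {"event": "state", "value": "started"})

# A real UI would convert a button press into this message; a test
# harness or automation actor can inject the identical message with
# no UI present at all.
post(ui_to_core, {"action": "start"})
core_step()
reply = core_to_ui.get()
```

Because neither side holds references into the other, the two queues could be replaced by a network link and the UI moved to another machine.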
-
Agreed. As I said, I don't do this normally. Just need to look into it for a particular application. I like the idea of having a channel between the UI and core that I can monitor, log and use for test injection. Also, this would make external automation very easy. All these things matter in my application.
-
I'm not saying you should always decouple your UI from the core engine of your code. I'm just saying that if you are passing FP refs to low level code, you are not decoupled. I do this all the time in many applications, and I know full well that I am essentially tying these two parts together in a way that would not be easy to separate in the future. So, while this might be the right approach in many cases, it cannot be called 'decoupled'. But in my case I want to move to a system where the UI can be replaced without changing the core engine code, and I want to eventually change to a client-server architecture where the core server can manage several client UIs simultaneously. That requires a fully decoupled UI design.
-
If you are passing control refs from your UI FP to your core engine, then you have a strongly coupled UI, not a decoupled UI. The better solution would be to set up a messaging system where the core engine tells the UI what the new data is, and the UI then takes that and applies it to the actual controls. That way, you could completely change the UI to display the data any way you wish without touching the core engine. I'm not saying I don't pass refs in some of my code, but I acknowledge this is a coupling issue I want to avoid in future designs.
-
I would say a truly decoupled UI would be one driven by an API that is not limited to LabVIEW. In other words, using something like Web Services, you could build your UI in LabVIEW or HTML or C++ or any other language and interface to the underlying engine via API calls. There are other options, of course. I think a good first step would be to separate the UI code from the engine code so your low level logic does not access any GUI elements. This could be done by using queues, events or other messages to communicate state changes and user actions between the UI code and the engine. Next, you need to ask yourself what the physical channel between the two will be. If you will be on the same machine in the same app instance, then you can use queues and events. If you plan to be across the network, then some other method will be needed (web services, raw TCP/IP, network streams, shared variables), and if your UI is not going to be LabVIEW, then you narrow yourself to web services or something like WebSockets (http://www.bergmans.com/downloads.html). Overall, decoupling now can allow you to pursue these options more easily down the road. Lastly, you need to consider whether you will build a client-server model where multiple UIs can interact with a single engine simultaneously. That is a harder nut to crack...
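As a rough illustration of the web-service route, here is a minimal Python sketch (standard library only; the endpoint and payload are invented) that exposes an engine call over HTTP, so a UI written in any language could poll it:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical engine query; in a real system this would reach into
# the running core engine rather than return a constant.
def get_engine_state():
    return {"state": "idle", "count": 42}

class EngineHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the engine state as JSON regardless of the UI's language.
        body = json.dumps(get_engine_state()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), EngineHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any HTTP client (browser JS, C++, another LabVIEW app) could do this:
url = f"http://127.0.0.1:{server.server_port}/state"
with urllib.request.urlopen(url) as resp:
    state = json.loads(resp.read())
server.shutdown()
```

Nothing in the handler knows what the client is written in, which is exactly the language-agnostic property argued for above.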
-
Best Practices in LabVIEW
John Lokanis replied to John Lokanis's topic in Application Design & Architecture
Any opinion on the NI Unit Test Framework vs. the JKI unit test package? (I have one license for the NI version but need to pick one before getting everyone licensed) -
Thanks for all the feedback. Some very good points in there to consider. Regarding PPLs, I currently package my plugins into an LLB using the OpenG Builder. I do this to make them distributable; essentially they become a 'DLL' at that point, allowing me to install them on my target machines. The advantage of this is that all the VIs used by the plugins are included and name-spaced at build time, so I can never have a conflict with a similarly named file from another LLB or another part of the code. The other advantage is I can pull from the same pool of reuse code for all plugins, and I only get a snapshot of the VIs at the time of the build. The disadvantage is I am using a non-standard build tool (OpenG Builder), and I want to separate my source from the compiled object code (new LV2010 feature), which I don't think will work with the OpenG Builder. I was hoping PPLs would give me the same functionality that I get from these LLBs. For those of you who have had issues with PPLs, can you give me more details or reference some CARs so I can see if the bugs will affect me? Regarding unit testing, my goal was to apply this to my reuse code, and I have a lackey I plan to utilize to write these tests. :-) The hope is this will make the reuse libraries more robust and ensure they continue to work as expected, since everyone will be using them in their projects. As for rewrite vs. refactor: I plan to branch my code and develop the new version by editing the existing code. But I need to continue to maintain the existing code while this refactoring takes place, so I will be pulling existing code over from the branch and using it in the new version if it fits. This is not a total rewrite from scratch; all the functionality that exists will remain. Just the methods used to achieve it will be upgraded to more modern best practices. I don't plan to release any of the new code to production until the refactor is complete.
And just so I am clear, this is not just about changing the code to use OOP, but rather about changing the dev process to produce better software in a team environment, instead of the lone LabVIEW ranger (me) cranking out code as fast as possible. I want to be a real CLA, not just a CLD on steroids... And finally, I hope the end result is code that my whole team can understand and maintain, not just me. Yes, it hurts job security, but it allows me to build an efficient software dev environment that my team and I will benefit from for years. I think when I am done, this will make a great case study to present at NI Week 2012. -
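As a flavor of what unit tests over a reuse library look like in practice (the function under test is a made-up example, and in LabVIEW this role would be played by a UTF or VI Tester test case rather than Python):

```python
import unittest

# Hypothetical reuse-library function under test: convert a raw
# instrument reading to engineering units.
def scale_reading(raw, gain, offset=0.0):
    return raw * gain + offset

class TestScaleReading(unittest.TestCase):
    def test_gain_only(self):
        self.assertEqual(scale_reading(10, 2.5), 25.0)

    def test_gain_and_offset(self):
        self.assertEqual(scale_reading(10, 2.0, offset=-5.0), 15.0)

suite = unittest.TestLoader().loadTestsFromTestCase(TestScaleReading)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The payoff is exactly the one described above: anyone on the team can change the library and immediately see whether existing users are still safe.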
Good point. But I don't plan to start from scratch, just refactor, reorganize and apply best practices. The original code works great for us, but only one person really understands it and can maintain it. We started with the best intentions, but deadlines forced the usual shortcuts, and fixes and features were added over time, though not in the most elegant way, since they were not architected in the first place. The goal here is to look at what we ended up with, convert it into a set of requirements, and then alter the existing code (rewriting parts as needed) using best practices and a team of developers. The end goal is a code base everyone understands and can help maintain. Oh, and since the system is working now, we have no deadline, so we can take the appropriate amount of time to get it done right. -
I like this idea. Our company uses Bugzilla. I will have to look into how that can integrate with Perforce and LabVIEW. I am sure the text coders have already sorted this out so I will have to learn from them. -
I have an opportunity to take an existing large project and re-architect it from the ground up. All options are open to me, so I want to apply all the latest and greatest tools and techniques to build an ecosystem for my code and my company that will make this project, and all others after it, easier to maintain. I would like your thoughts on what works and what is not worth the effort. Some ideas I am considering:
- Change the code to use LVOOP design patterns.
- Organize all my reusable code into distributable packages (VIPM?) that are available in the palettes.
- Add unit tests to all (or most) of my code.
- Integrate the Diff and Merge tools into my SCC system (Perforce).
- Use the GOOP Dev Suite to generate UML from my code (and vice versa) for documentation purposes.
- Use Packed Project Libraries to distribute plugin code instead of LLBs.
And on a more technical note specific to my project: decouple the UI from the code. What I mean is, instead of the UI being the FP of a VI, implement it as a web page (using the UI Builder?) and have that interface to the code via web services. This might be too radical to do right now... So, please let me know what tools and techniques you use that work well in large applications with multiple devs. I want to be as state of the art as possible, since I doubt I will get another chance to do this again. -John
-
Labview 2010 missing at registration
John Lokanis replied to Biggeveen's topic in Site Feedback & Support
Just bumping this so it get noticed. I tried to update my profile and could not select LV2010 either... -
That would be one heck of a URL string for all 28 inputs. Here is a more elegant solution: in LV2010, you can create web service VIs that persist in memory beyond the first call (when setting up the VIs as source files, there is an Auxiliary VI setting). Using this technique, you could create several VIs and split the 28 inputs amongst them. These web service VIs would be called first and would store the input values; then the last one, or a later one, could be used to 'execute' the function.
-
I just released my 2010 build to production so I should soon have an answer for you. So far, so good.
-
Try putting the DLL in the same folder as the EXE, or in the data subfolder; this worked for me in past versions. Also, make sure you have the .iak file included in your installer.