Everything posted by Daklu
-
QUOTE (menghuihantang @ Mar 25 2009, 06:58 AM) I fully agree. The question we are discussing is 'what is going to replace it?'

QUOTE (menghuihantang @ Mar 25 2009, 06:58 AM) ...mouse is a amazing device, like you said.

Believe it or not, I'm not all that fond of mice. Or more specifically, I'm not all that fond of the way many desktop applications require constant switching between the mouse and keyboard. It wastes time.

QUOTE (menghuihantang @ Mar 25 2009, 06:58 AM) I don't know much about eye tracking. But are you sure it is less challenging than touchscreen.

No, eye tracking is more challenging than touchscreens. The point is that touchscreens have a practical limit on how good they can be. That limit is defined by the way users interact with them. Even if a touchscreen were infinitely accurate and infinitely fast, you still have the fundamental problems that users can't touch accurately and that moving your entire arm takes more energy than moving your fingers.

QUOTE (menghuihantang @ Mar 25 2009, 06:58 AM) Yes, fingers are inaccurate than a little mouse. But nobody ever said we can only use fingers. How about laser beams?

The original question and my response referred to touchscreens. There are countless alternative navigation methods that could be devised. The trick is to find one that offers real advantages over the mouse. How does laser navigation make me more efficient? How does it help me get my job done faster? What are the human limitations? (Try this: Take a laser pointer and hold it about 2 feet away from the computer screen. Now target different UI elements on the screen and see how long it takes before you can consistently hold the beam on the element. Can you hold it on the menus? Can you hold it on toolbar buttons? Can you hold it between the 'll' in the word 'alligator'?)

QUOTE (menghuihantang @ Mar 25 2009, 06:58 AM) I am no expert at all and it's totally stupid imagination. But the coolness factor has always been one of the best motivations. Think about the life around you, all we try to do is to build something cool and make that coolness last for ever.

Coolness is an important factor for certain types of products. The Apple iTouch is a perfect example. As an MP3 player it's overpriced and underfunctional, yet people buy it because it is cool. Contrast that with a screwdriver. There are all sorts of things that could be added to make a screwdriver cooler (zebra stripes, neon lights, biometric security, ...) yet you don't see these things in screwdrivers. Why? Because the screwdriver is a tool. Nobody uses a screwdriver just for the sake of using a screwdriver; it's used to accomplish something else. Those added coolness features don't help me screw together two pieces of wood any faster or make it easier to pry the lid off of a paint can. They don't offer any benefit to the user.

The mouse is also a tool. It's used to interact with the computer. How often do you sit at your desk with the computer off and move the mouse around, just for the experience? Labview is a tool used to solve other problems. People use it because they can solve those problems easier and faster with Labview than with other programming languages. Computers, for the most part, are used as tools. (Whether or not computers are being used as tools while gaming is debatable.) When people use tools they reach for the one that helps them solve their problem quickly and easily. If you want to replace the mouse as a navigation device, coolness alone isn't going to cut it.
QUOTE (menghuihantang @ Mar 25 2009, 06:58 AM) Ok, maybe it's not time yet, but that's not a good reason to stop pursuing.

I'm not saying stop trying to make better navigation systems. I'm saying be smart about where you invest your energy. Touchscreens, IMO, are a dead end if you are hoping to replace mice for general computer use.
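(Tangentially: these pointing limits are exactly what Fitts's law quantifies. Movement time to a target grows roughly as T = a + b*log2(D/W + 1), where D is the distance to the target and W is its width. Make the target smaller (the 'll' in 'alligator') or the reach longer (a whole-arm touchscreen motion) and acquisition time climbs. I'm recalling the formula from memory, so check the literature before quoting me on the exact form.)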
-
QUOTE (neBulus @ Mar 24 2009, 11:50 AM) Especially the part where I agreed with you, right? :laugh:
-
I worked for 5 years in a group designing and building consumer mice and keyboards. Furthermore, I just recently finished a 2.5 year stint at a company where I did extensive work with capacitive touch technology, including touch screens. I don't claim to be an expert in the field, but I've interacted with usability studies enough to learn a few things.

If you want to move Labview programming into a new paradigm, such as touch screen programming, you have to offer the user a tangible benefit. Once the coolness factor wears off, what advantage does touch give the user? In terms of speed and accuracy for most users, nothing beats a mouse. Part of that is simply because that's what users are used to. Part of it is because the mouse works extremely well as part of a complete, closed-loop cursor positioning system: your brain decides where to move the cursor and feeds inputs to your hand; as your hand moves the mouse, your eye tracks the cursor and your brain sends small error correction commands to your hand, allowing you to get the cursor to the target very quickly.

Capacitive touch as a technology cannot compete with the speed and accuracy of a mouse. There is simply too much noise in the system. You typically don't see the noise at the user level because of the extensive filtering taking place under the hood. Of course, that filtering comes at a cost--reduced response time. (A toy illustration of this tradeoff is sketched at the end of this post.) Everybody's favorite capacitive touch screen, the iPhone, was lagging roughly 100 ms last time I checked. (The effect is most easily observed if you have an application that displays a dot at the location the sensor is reporting and you move your finger around the screen quickly.) 100 ms isn't much in absolute terms, but that much of a delay can sure throw off the user's cursor positioning system.

Touch screens suffer from an additional problem that has nothing to do with the technology: quite simply, users are not accurate with their fingers. There are several reasons for this. The main reason is that the finger blocks the target, making it impossible for the user to make those fine-tuning adjustments that put the cursor exactly where they want it. Another contributing factor is that a user's finger 'footprint' will be different depending on the finger's angle when it makes contact with the touchscreen and on how much pressure the user applies. It is very difficult for users to repeatedly hit small targets when using a finger on a capacitive touchscreen. As a general rule, at best you can expect users' touches to land within ~5mm of their target position, so well designed touch screen interfaces won't have any UI elements smaller than ~10mm across. Imagine trying to hook up a front panel terminal or select a wire on a block diagram using a touch screen.

There are ergonomic issues too. Programming Labview via a touchscreen requires large arm motions. It is much less efficient, in terms of both time and energy, than using a mouse. "Labview shoulder" would be the new repetitive stress injury. (This is also why I think the user interface from Minority Report is misguided. Too many grand motions are required to get the work done.)

IMO, Ben is right; the next big breakthrough in cursor positioning is going to come from eye tracking. It's the only thing currently on the horizon that offers advantages over mice. There are huge technical and usability hurdles to overcome, but whoever develops and patents a good consumer-level eye tracking system is going to make a bundle of money.
(I actually tried, unsuccessfully, to generate interest in researching eye-tracking systems during my time developing mice.)

QUOTE Essentially you're talking about the arrow keys on the keyboard.

Personally, I use them occasionally for positioning selected items on the screen, but if I'm going to have to use them just to get enough accuracy to select an item, I'm going to get real irritated real fast.

QUOTE You can also have a keyboard on the touchscreen just like a real physical one.

Actually, you can't. You can make the layout the same, but that's about it. Touchscreens, being flat, don't have the same ergonomics or tactile feedback that real keyboards do. The tactile feedback is extremely important for touch typists. The little bumps on the 'F' and 'J' keys help me make sure my hands are positioned correctly without having to look down every time I move them. The curvature of each key provides the subtle hints that keep each finger on the correct key. If I go to hit the 'O' key and my finger is off a little bit, I subconsciously note the different feeling and my brain corrects for it the next time. Touchscreen keyboards don't provide either of these, which is why people can type faster on real keyboards than on touchscreen keyboards.

QUOTE You will find out your programming speed is much much faster and more fun.

Two things in particular make programming fun for me:
1. Learning how to do new things (e.g. LVOOP, XControls, etc.).
2. Developing software that makes mechanical things move. (Because that never loses the coolness factor.) (More generally, this could be considered developing software that makes me or someone else more productive.)

Fighting with the development environment's user interface is NOT something that falls into my "fun" category. (Ask me how much fun I'm having the next time Labview crashes.) A proposal that makes either of those two items easier is good; if it makes them harder it is bad. Programming Labview via a touchscreen, IMO, falls squarely on the "bad" side of the equation.
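As promised above, here's a toy illustration of the filtering-versus-latency tradeoff. This is pure Python with invented numbers (the noise level and smoothing factors aren't measured from any real touch controller); it just shows why heavy smoothing necessarily makes the reported position trail the finger:

```python
import random

def smooth(samples, alpha):
    """Exponential moving average: smaller alpha = less noise but more lag."""
    estimate = samples[0]
    filtered = []
    for s in samples:
        estimate = alpha * s + (1 - alpha) * estimate
        filtered.append(estimate)
    return filtered

# Simulate a finger sweeping steadily across a noisy capacitive sensor.
true_path = list(range(100))
raw = [p + random.gauss(0, 3.0) for p in true_path]

heavy = smooth(raw, alpha=0.1)   # smooth cursor, but it trails the finger
light = smooth(raw, alpha=0.8)   # responsive cursor, but it jitters

# Compare the reported positions halfway through the sweep: the heavily
# filtered value sits well behind the finger's true position.
print(true_path[50], round(raw[50], 1), round(heavy[50], 1), round(light[50], 1))
```

On a steady sweep the filtered value lags by roughly (1 - alpha)/alpha samples, so at alpha = 0.1 and a 100 Hz report rate that's on the order of the ~100 ms delay described above.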
-
I had never heard of duck typing. As I read the article I was thinking, "this sounds like dynamic interfaces." Something like that could be very useful. Labview seems to be pretty rigid as far as typing goes. I understand that makes it easier for inexperienced programmers to build functional code, but it also leads to frustrations when trying to implement more advanced functionality. In general I think Labview places too many constraints on developers.

I'm with you as far as testing private VIs. I can't think of any reason why developers should intentionally be prevented from testing private members if they determine doing so is useful. (Although there may be technical reasons under the hood that make that infeasible.) It appears most of the posters agree. If I were to pursue it more I'd try writing something that changes all the VIs to public before starting the test and then changes them back when the test is complete.
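For anyone else who hadn't heard of it, here's roughly what duck typing looks like in a language that supports it. This is a Python sketch with made-up names, not anything specific to Labview:

```python
class Duck:
    def speak(self):
        return "Quack"

class Robot:
    def speak(self):
        return "Beep"

def greet(thing):
    # No type declaration and no common ancestor required; any object
    # with a speak() method is acceptable. That's duck typing.
    print(thing.speak())

greet(Duck())    # prints: Quack
greet(Robot())   # prints: Beep
```

In Labview terms, Duck and Robot would need a common parent class (or some other explicit relationship) before a single VI could accept both, which is exactly the rigidity I mean.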
-
QUOTE

Short answer: Yes.

Longer answer: Your description doesn't include enough information for me to give you any specific advice. What hardware are you using to create the output signals? Serial port? GPIO module? Motor controller board? I assume you are sending the signals to a motor driver built into your hardware? Just curious, are you using stepper motors or DC motors with encoders?
-
QUOTE (bsvingen @ Mar 3 2009, 11:52 PM)

Not at all. The programming was spot on. It was the design that was a bit fishy. (FWIW, I'm constantly reminding myself not to put too much functionality in my class vis.)

QUOTE Did you try the child classes on the original? I used LV8.21 and there was no way.

Yep, the first thing I did (after changing the class wires, labelling the class banner, and putting names on the vi icons) was wire up a child object to the shift register. That's what the screenshot is. I am using LV 8.6 though.

QUOTE LV even reported an internal error when closing down and starting up again.

Nothing terribly remarkable about that.

QUOTE ...though he's got a dynamic dispatch input on "Test Data.vi"...

Good catch. I started by playing around with overriding Test Data in the child classes before deciding it was easier to just do it all in the SSSM class. It appears I forgot to change the input. (I wish there were an easy way to tell whether inputs are dynamic dispatch by looking at the context help window.)
-
QUOTE (bsvingen @ Mar 3 2009, 04:32 PM)

It's a logic error in your program. All three child classes use 'Set and Test.vi' within their 'State.vi.' The parent class does not. Therefore, when you start the program with the parent class, SSSM:State is executed, bypassing 'Set and Test.' Neither the data nor the object on the wire ever change. You can't simply add 'Set and Test' to SSSM:State since your child classes make calls to that vi. You need to refactor your VIs so all instances of 'State.vi' do essentially the same thing--return the current data. There are lots of ways you could refactor this... I've included one way in the attached file.

QUOTE The strange thing is that I cannot simply use a child class, the wire will be broken. I have to make a vi with a parent output that is wired to a child class inside. When looking at the probes it is evident that there are actually three kind of classes.

I had no problem hooking up any of the child classes.

Download File: post-7603-1236145110.zip
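For what it's worth, the shape of the logic error (and one possible fix) translates to something like the following Python sketch. All the names are invented; I'm only mimicking the structure of the VIs, not reproducing them:

```python
class SSSM:
    """Very loose paraphrase of the broken design (all names invented)."""
    def __init__(self):
        self.data = 0

    def set_and_test(self):
        self.data += 1        # stand-in for the real transition logic

    def state(self):
        return self.data      # parent's State bypasses Set and Test, so
                              # neither the data nor the object ever change

class Child(SSSM):
    def state(self):
        self.set_and_test()   # children transition inside their State.vi
        return self.data

# One possible refactoring: every state() only reports data, and the
# transition step is driven from a single place for parent and children alike.
class Refactored(SSSM):
    def run(self):
        self.set_and_test()   # always transition first...
        return self.state()   # ...then ask for the current data
```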
-
QUOTE (crelf @ Mar 3 2009, 01:31 PM) Well not anymore... :laugh: I could have at the time though. Why? Koo koo kachoo
-
Had this little beaut pop up while stepping through a BD a few minutes ago. Locked up LV hard. Hadn't seen it before, and since I have a few minutes while LV acks my three-finger salute I figured I'd share.

The magical mystery error is blowing my BD away
Blowing my BD away, BD away
-
A class within a class - breaking the private data boundary
Daklu replied to crelf's topic in Object-Oriented Programming
I had thought Labview automatically saved the object's state when writing to disk and would restore the object to that state when loaded. I can't say I've ever tried it, though, and seeing you ask the question makes me think I was wrong...
-
Rebinding the 'Enter' key on Quick Drop?
Daklu replied to Daklu's topic in Development Environment (IDE)
QUOTE (neBulus @ Feb 27 2009, 05:57 AM)

Whaddya know... it works! Two things threw me off here:
1. I'm so used to needing to do some other keyboard action to 'confirm' auto-complete guesses in other applications that it never even occurred to me to try clicking on the BD.
2. The Quick Drop box looks an awful lot like a regular dialog box, which as a general rule requires some sort of user interaction to dismiss. Hence my normal habit of double clicking on the list box or hitting the enter key. (Although I always thought it was odd there wasn't an 'OK' button there.)

Thanks for the tip!
-
I've been forcing myself to use quick drop recently and am learning to like it... except for one small thing that's been annoying me. I almost always get the correct function highlighted in the top edit box within 2-3 keystrokes, even if the main listbox still has dozens of entries in it. What I'm finding irritating is that to select the function in the edit box I need to either find it in the list box and double click on it (a relatively slow process) or hit the 'Enter' key, which requires me to move one of my hands away from its regular coding position. (Left hand on ASDF, right hand on mouse.) Is there a way to bind another key to perform the 'Enter' key function? I'd like to use the Tab key to accept the auto-complete guess, but other options such as Ctrl-Space or even Shift-Space would work for me as well.
-
QUOTE (benjaminhysell @ Feb 26 2009, 01:57 PM)

I recently converted to packages exclusively to distribute reuse code--mostly to myself, but occasionally to others as well. Managing versions and keeping all computers updated with the latest releases is much easier with VIPM than with copy-and-paste.

On the JKI forum Jim mentioned including the parent class as part of the project. Personally I prefer to remove it from the project once I set up the inheritance. The parent class is still available in the Dependencies section of Project Explorer, but the separation makes it clear to me that I should not be editing the parent class' source code. As near as I can tell there's no practical difference between the two methods; it's just a matter of personal preference.
-
QUOTE (Aristos Queue @ Feb 20 2009, 02:43 PM)

Actually I was thinking, "I hope this isn't a totally stupid question that everybody except me knows about."

QUOTE We were hard pressed for a good way to document when a control was set to a non-default default value, but we did think it was important to indicate in some way.

Agreed, although using non-default values on a control that has hidden values seems to violate much of what Labview programming is about. I'm still trying to think of a valid use case for this particular trick.
-
QUOTE (Aristos Queue @ Feb 20 2009, 11:20 AM)

Since you can't set class values on the front panel like you can with most controls, how would you change the default value of a class control? I tried changing the default values of the class, but as expected all the front panel controls were updated to the new default value.
-
QUOTE (Justin Goeres @ Feb 20 2009, 09:25 AM)

Odd, especially since that class inherits directly from Labview Object and has no children. I dropped another class cube from my project onto the FP and it looked normal. Labview crashed when I probed the wire and tried to run the vi. I guess it meant Labview was confused. :laugh: When I restarted Labview all was back to normal... (In fairness, I had previously been mucking around with the inheritance of many classes in my project.)
-
What does the dark border in the class cube indicate? It changed when I renamed the class. I've saved and mass compiled the project and the error window is clean.
-
Inconsistent naming for auto-generated class member vis
Daklu replied to Daklu's topic in Object-Oriented Programming
QUOTE (Aristos Queue @ Feb 16 2009, 07:08 AM)

Is there a way for users to search known bugs? A couple searches on NI's website didn't turn up anything helpful.
-
Question about implementing a Delegation Pattern
Daklu replied to Daklu's topic in Object-Oriented Programming
QUOTE (jdunham @ Feb 15 2009, 10:40 PM)

Good article. The recent discussions on the importance of good specification documents also relate to this problem.

QUOTE I thought about your issue some, but I think there's not enough information for anyone to give you a sensible reply. Not that you didn't give it a good try, but I think if it were easy, you would have solved it on your own, and no one else can know what the really hard parts are.

I appreciate you taking the time to think about it. Being essentially a single developer learning Labview by trial and error means I turn to LAVA as a primary resource when I run into problems. Sometimes I can state the problem clearly and concisely; sometimes the scope is broad enough that I end up throwing chum in the water to see what surfaces. In this particular case my bait was no good. Reading my posts might leave others puzzled, but at least taking the time to type out a post describing the problem nearly always helps me understand it better.

QUOTE Like when you said "The current architecture is also limited in that multiple connections to a device are not allowed. I could not control the panel via I2C and simultaneously monitor the microwave's serial output at the same time." It sounds like your architecture is at fault, so then you should fix it, but it's not clear whether you meant that. Is the problem that your class's private data doesn't have the right information, or do you need to add some kind of locking mechanism to your I/O methods?

The root problem is that the new requirements violate the assumptions used to create the original architecture. The original design assumed each control panel would always use a single type of communication. For example, we always use I2C communication to talk to the TouchMagic control panel, so to give the TouchMagic child class the ability to talk to the hardware it contains an I2C Interface Base Class object as private data. Now I'm faced not only with supporting multiple communication methods for each device but with the possibility of multiple simultaneous communication methods. (Such as sending data to the control panel via I2C and reading the response through the microwave's debug serial port.) One implementation is to create multiple TouchMagic child classes, with each one containing a different combination of communication methods (I2C only, serial uart only, I2C write-serial read, etc.). Following that path will quickly lead to unmaintainable code. There are other short-term hacks I could implement (and have implemented) to provide the immediate functionality required, but it's a path that will get ugly quickly. Since communication methods are subject to change and the communication hardware classes don't have a common ancestor class, I started looking at interfaces/delegates. Unfortunately there are not many examples of implementing delegates in Labview and I'm uncertain about what the tradeoffs are in implementing them. A rough sketch of what I have in mind is at the end of this post. ("Chum, meet water.")

[Note - Read the end of the post before you spend any significant time thinking about this.]

QUOTE OK, you also wrote "What do I do when two independent Interfaces are competing for the same hardware, such as if the IDIO Device and II2C Device both use the same Aardvark? One could change the hardware settings and put the device in a state the other doesn't expect. I think the solution lay somewhere in the Aardvark Class implementation, but I haven't put my finger on it yet. (Maybe a "Lock Settings" property?)". It seems like you should use a mutex, which in LabVIEW is called a semaphore (near the queue palette, and at some point recently they were rewritten to use LV queues).

My comment was a very poorly worded last-minute addition that followed a lengthy line of thought I didn't lay out. Forget I ever mentioned it. (But yes, the semaphore is the solution to that immediate question.)

QUOTE Maybe you should try to hire a local LabVIEW consultant (obviously you'd need a really good one) just to bounce your ideas off of for a day or so. Sometimes this can be hard to explain to your boss, but it's worth a try.

Been trying for over a year. Not going to happen. As a matter of fact, I just found out my manager is taking the development responsibility for this test system away from me and turning it over to a software test tools team to port the whole thing to c/c++/c#. :headbang: Due to my pointy-hair, my questions have become largely academic, but I'll continue researching an answer anyway.
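As promised, here's a rough sketch of the delegation idea, written in Python because it's easier to sketch in text than in G. Every class and method name here is invented for illustration, and the "anything with a write method" part is exactly what Labview's strict typing makes hard (it would need an interface class or a common ancestor):

```python
import threading

class I2CInterface:
    """Stand-in for an I2C hardware class (names invented)."""
    def write(self, data):
        print("I2C write:", data)

class SerialInterface:
    """Stand-in for a serial UART hardware class."""
    def read(self):
        return "serial response"

class TouchMagicPanel:
    """Control panel class that delegates I/O to whatever objects it is
    handed, instead of hard-coding one communication method per child class."""
    def __init__(self, writer, reader):
        self.writer = writer          # anything with a write() method
        self.reader = reader          # anything with a read() method

    def press_button(self, button):
        self.writer.write(button)    # send via one interface...
        return self.reader.read()    # ...read the response via another

# Mix and match at setup time (I2C write, serial read) rather than creating
# a child class for every combination of communication methods:
panel = TouchMagicPanel(I2CInterface(), SerialInterface())
print(panel.press_button("START"))

# And for two interfaces sharing one physical Aardvark, a lock (LabVIEW's
# semaphore) serializes access so neither changes settings mid-transaction:
aardvark_lock = threading.Lock()
with aardvark_lock:
    panel.press_button("STOP")
```
-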
surprise when 2 VIs have the same password
Daklu replied to Antoine Chalons's topic in Development Environment (IDE)
One workaround is to leave the source code unprotected and apply a password when you build the code. OpenG Builder has an option that sets a random password at build time.
-
When creating class member vis using Right Click -> New..., the naming convention of the class input and output in the automatically generated member vi is not consistent. New -> VI from Dynamic Dispatch Template and New -> VI from Static Dispatch Template use the class' localized name as set in the class properties dialog box but automatically remove the .lvclass extension. New -> VI for Data Member Access appears to use the class' filename.
-
Question about implementing a Delegation Pattern
Daklu replied to Daklu's topic in Object-Oriented Programming
QUOTE (jdunham @ Feb 13 2009, 01:48 PM)

Oh, I understood the intended message. I just chose to look at the story from a different perspective.
-
Question about implementing a Delegation Pattern
Daklu replied to Daklu's topic in Object-Oriented Programming
QUOTE (Phillip Brooks @ Feb 10 2009, 03:41 AM)

That was funny, but I can't help but have sympathy for the software developer. That advisor was in a no-win situation the moment the king found the toaster. Nobody in their right mind would ask the electrical engineer to "just tweak it so I can cook a ham and cheese omelet" after he shows them the prototype. It's plain by looking at the device that it's not suited for that task and would require extensive redesign. Software developers aren't so fortunate. It's impossible for a person to get an intuitive understanding of the capabilities and limitations of a software prototype by glancing at the finished product, so they make unreasonable requests and expect them to be easily implemented.

Eight years ago I was writing test software for an R&D engineer. Every time I sent him updated software, the following morning he was at my desk saying, "Your software is really good, but could you just change it so..." Lather, rinse, repeat. I left that job.

Even software developers are guilty of it, as evidenced by the suggestions (mine included) on LAVA for ways to implement Labview fixes. Moral of the story: recognize that when your king finds his toaster, your goose is cooked.

As an aside, I still haven't figured out how to address my problem.
-
Is name mangling still needed when building code?
Daklu replied to Daklu's topic in Application Design & Architecture
QUOTE (Aristos Queue @ Feb 11 2009, 02:40 PM)

Magic?

QUOTE Suppose you have two copies of X.lvlib, each of which contain Y.vi. When you load the second X.lvlib, according to your scenario, we would put that library in a new namespace Temp:X.lvlib. Now you load Alpha.vi, which calls X.lvlib:Y.vi. It is going to call the original X.lvlib:Y.vi, regardless of which one it was expecting to invoke because that's the one that got the name.

Yep, bad oversight on my part. I hadn't thought about how to link the block diagram subvi with the runtime instance of the subvi. That's probably why I'm a test engineer instead of a design engineer--breaking things is easier (and far more entertaining) than building them.

My original thought had been for developers to apply namespaces on an 'as needed' basis on their dev computer. That would maximize malleability and avoid the overhead (both in terms of processing time and human management) associated with designing and maintaining a complete namespace structure. In retrospect it's obvious that doesn't work. The deployed application might try to reference a pre-installed shared library that, even though it is the same library, has an old namespace. The app, finding mismatched namespaces, determines the library it needs isn't present and throws an error. Following that line of thought leads me to believe a fixed, centralized namespace lookup table is required. (Although if the table is generated at runtime it would provide more flexibility.)

The question that naturally comes to mind next is: what's the point of implementing namespaces if you can't change them without breaking applications? Have the problems of changing filenames simply been shifted to the realm of namespaces? I actually thought about this a lot today and think namespaces are still advantageous. Naturally a library has a 1-to-1 relationship with its filename. Is there any reason a library couldn't have a 1-to-many relationship with namespaces? I believe allowing a library to be accessed in source code through multiple namespaces could help provide a migration path for Ping/Pong situations.

For convenience, let's assume the default namespace for libraries is <User>. When we realize Austin's <User.Ping> conflicts with our <User.Ping>, we call them up and decide they will use the <Austin> namespace and we will use the <Seattle> namespace. We each add the namespace to our Ping.lvlib source code and redistribute the built code to our developers. Now their library can be accessed via the <User.Ping> and <Austin.Ping> namespaces. Similarly, our library can be accessed via the <User.Ping> and <Seattle.Ping> namespaces. In the application we are building that uses both libraries, we reference them through their new namespaces, <Seattle.Ping> and <Austin.Ping>. Legacy applications which use only our library continue to reference it through <User.Ping>. Using this scheme we can continue to make updates to our Ping library and all legacy applications will remain functional. Our legacy applications can be updated to reference <Seattle.Ping> instead of <User.Ping> at our convenience. (A toy sketch of this lookup scheme follows below.)

There is a significant limitation this technique doesn't fix: an application cannot reference either library via <User.Ping> if both libraries are present on the system, unless one of them drops out of the <User> namespace. That puts some restrictions on having multiple applications using those libraries installed on a single computer. Still a net positive in my mind, even if not as easy to use as I originally imagined.
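In table form, the 1-to-many idea might look something like the following sketch (Python, with invented paths and names; the real mechanism would obviously live inside Labview's loader):

```python
# Each fully qualified name resolves to exactly one library on disk, but one
# library can be reachable through several namespaces.
namespace_table = {
    "User.Ping":    "C:/reuse/seattle/Ping.lvlib",   # legacy alias
    "Seattle.Ping": "C:/reuse/seattle/Ping.lvlib",   # new canonical name
    "Austin.Ping":  "C:/reuse/austin/Ping.lvlib",
}

def resolve(qualified_name):
    """Return the library file a namespaced reference points to."""
    return namespace_table[qualified_name]

# Legacy apps keep working through the old alias...
assert resolve("User.Ping") == resolve("Seattle.Ping")
# ...while new apps can reference both Ping libraries unambiguously.
# Note the limitation above is visible in the table itself: "User.Ping"
# can only ever point at one of the two libraries.
print(resolve("Austin.Ping"))
```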
QUOTE Actually, no. Some things are logically impossible: A general hash function that never produces collisions.

An identity hash would do it. It has limited practical value (to say the least), but it is logically possible.

QUOTE Sorting data faster than O(n*log n).

Given prior information about the distribution of the data to be sorted, algorithms can sometimes be constructed that operate faster than that. It also depends on the type of sorting being done. I can sort my dirty laundry in O(n). (Whites, darks, and colors.) Oddly, when I'm tired my laundry sorting algorithm improves to O(1). Special cases? Yep, but logically possible.

QUOTE Preventing name collisions among a system where anyone can contribute a name without a central database of names.

Yeah, okay... you got me there.
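(The laundry example is just a three-bucket counting sort. When the set of keys is small and known ahead of time, one O(n) pass really does it; here's a quick Python illustration, with made-up laundry:)

```python
def sort_laundry(garments):
    """One pass, three known buckets: O(n), no comparisons needed."""
    buckets = {"whites": [], "darks": [], "colors": []}
    for item, category in garments:
        buckets[category].append(item)
    return buckets["whites"] + buckets["darks"] + buckets["colors"]

laundry = [("t-shirt", "whites"), ("jeans", "darks"),
           ("polo", "colors"), ("socks", "whites")]
print(sort_laundry(laundry))   # ['t-shirt', 'socks', 'jeans', 'polo']
```
-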
Is name mangling still needed when building code?
Daklu replied to Daklu's topic in Application Design & Architecture
The Ping/Pong scenario is contrived, but the issue it illustrates is real enough to me. This is the real situation I'm in right now. I work at MegaBucks, a large company spread across several organizational divisions and dispersed geographically. In my group I was the first one to start using Labview. I created several reuse libraries to solve immediate problems and facilitate future code. Over time a few other people in my group started using Labview and used my reuse libraries. A little more time passed and I discovered a group of Labview users in a different division working on similar problems. I have since found a few other islands of Labview in various locations.

When I first created those reuse libraries I gave them names that were descriptive enough for the context in which I anticipated using them: Robot.lvclass, ButtonPusher.lvclass, DataAnalysis.lvlib, etc. Unfortunately these names are too general to be meaningful when the code is shared with the other Labview users. In the best case the purpose of my reuse code is somewhat obscured in their project; the worst case is a name collision with one of their reuse libraries, which incurs the developer overhead I mentioned above. Either way, sharing reusable code is hindered.

Maybe I should have given my libraries more descriptive and unique names. After all, Robot.lvclass is an extremely general name and a good developer should anticipate a name collision somewhere down the road. I suppose the name could have described what the robot does rather than what it is. Except I work in a development test environment; equipment is frequently repurposed to accomplish new tasks. My control panel testing robot today can easily become a glue dispensing robot tomorrow.

I could use NI's convention of prefixing the library with an abbreviation and call it MB_Robot.lvclass. Oops, the other groups might use that convention too, so I'll have to extend that convention to include the organizational structure: MB_ConsumerProducts_HouseholdAppliances_MicrowaveOvens_Test_Robot.lvclass. At least that accurately describes what device the code is intended for. Umm, no thanks. Not only is the name unwieldy, but when the Consumer Products division is dissolved in our biannual company reorganization the name is no longer meaningful.

Maybe I should give the robot an internal code name and use that as the library name: Robot_Homer.lvclass. The problem is that in my group the robot has been known simply as "the robot." I might call it Homer but the name doesn't have any meaning outside the context of my brain. People get irritated because they view it as unnecessarily complicating discussions. (I've tried it.) As it turns out, I did essentially ensure a unique name by using OGB name mangling to append an acronym of my group's informal name. Of course, the acronym is meaningless to everyone except me and it is now firmly embedded in legacy code.

-----------------

In its simplest form the namespace could simply be a string property attached to a library. When Labview attempts to load a library with the same name as a library already in memory, it compares the namespace of the library in memory with the namespace of the library on disk. If they are the same then the libraries are the same; different namespaces = different libraries. The namespace itself doesn't have to be used anywhere except when loading identically named libraries. A toy sketch of this check is at the end of this post.

(One of my favorite sayings is, "If the solution seems simple I don't know enough about the problem." Obviously there are many design considerations in play I know nothing about. I don't mean to suggest the solution is this simple; I'm using this as an example to further illustrate my concept of namespaces in Labview.)
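Spelled out as pseudologic, the check might look like this (Python, everything invented; not a claim about how Labview's loader actually works):

```python
# Toy sketch of the load-time namespace comparison described above.
libraries_in_memory = {}   # filename -> namespace string

def load_library(filename, namespace):
    if filename not in libraries_in_memory:
        libraries_in_memory[filename] = namespace
        return "loaded"
    if libraries_in_memory[filename] == namespace:
        return "same library, already in memory"
    # Same filename but a different namespace: treat it as a distinct
    # library instead of silently linking against the wrong one.
    return "different library, load separately"

print(load_library("Ping.lvlib", "Seattle"))   # loaded
print(load_library("Ping.lvlib", "Seattle"))   # same library, already in memory
print(load_library("Ping.lvlib", "Austin"))    # different library, load separately
```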