Everything posted by Daklu

  1. QUOTE (JFM @ Sep 23 2008, 11:31 PM) I figured there had to be an easier way... thank you very much.
  2. Here's one that has me scratching my head... I would like to have a string indicator show the contents of a string control and update while the string control is being edited. I've been playing around with it a bit and the 'Value Changed' event doesn't fire until the control loses focus. The 'Key Down' event might work if I tracked keystrokes during editing, but it looks fairly tricky. (And I'm not sure how I would determine the initial cursor position, or handle cases where the user changes the cursor position via the mouse.) I get the sense I'm making this much more difficult than it needs to be.
  3. QUOTE (crelf) Not that long... maybe 5 or 6 months.

QUOTE (crelf) That should be fine. How complex are your classes?

Most of them are not that complex at all. It ranges from 5 to maybe 50 (at most) VIs. I have three different class trees, with my main top-level classes containing the base class of the two other class trees as private data.

QUOTE (crelf) Probably not. I agree that the first place to look to speed up your system is to up the RAM, but unless you've got so much going on in the background that your virtual RAM is being used extensively, it probably won't help. Besides, 2 GB is plenty of RAM for general use.

That's what I thought. After a little more digging I found one of the svchost processes occasionally takes over the processors and has racked up nearly 12 million page faults. THAT can't be good for performance. It's odd though... Task Manager indicates I never hit 2 GB of memory use, yet it appears things are still being paged to disk.

QUOTE (crelf) Oooo - that's a really good point. ...and it should force you to do a reinstall of everything too (see above)

Yes, that is a good point I hadn't really considered. Investing in a nice 10k rpm drive should help things considerably. I think my notebook disk is using an old phonograph motor.
  4. So my Dell Vostro 1500 laptop is struggling while running the LabVIEW dev environment. I suspect data is moving at near light speed, which is causing time dilation and means from my perspective everything is veeeeeeerrrrrrrryyyyyy ssssllllloooooowwwww. Something as simple as activating a context menu by right-clicking on a wire means I have to wait ~5 seconds for the UI to respond. Trying to enter or exit palette customization mode means I get a good 15+ seconds to get up and stretch. Admittedly I usually have lots of other background processes running simultaneously, and the slowdown isn't limited to LabVIEW; it takes a good 5 minutes for my comp to boot and load Outlook. I'm really curious about what real-world performance others have experienced when upgrading hardware/OS. With that in mind...

I know part of the sluggishness is due to dated hardware, but I have seen comments about LabVIEW slowing down when multiple classes are loaded. With better hardware, does LabVIEW performance scale similarly to other applications, or are there some inherent inefficiencies in LabVIEW's source code that cause the dev environment to perform poorly? (My current project has only 8 classes and one lvlib.)

Conventional wisdom says the first thing to do to improve system performance is increase RAM. I have 2 GB currently, but in XP the usable memory is limited to 3 GB, leading me to think I should move on to Vista to take advantage of a full 4 GB of available memory. Is the extra 1 GB going to make much difference? Does LV run on 64-bit OSes, and if so, does it benefit much from the increased memory available?

Has anyone run into issues with developing applications in Vista and deploying them to XP computers, either as executables or as applications running in the dev environment? (Other than font changes.)

Is the LabVIEW dev environment designed to take advantage of multiple processors? My laptop has two cores but still runs like a dog. Does LV itself benefit from quad-core processors? (Again, I'm referring to the dev environment, not LV applications I write.)
  5. Is there some special technique required to insert a class cube on a custom palette? I've tried several times but it doesn't seem to be working.
  6. Follow up to the follow up... (Follow up^2?) OpenG Builder doesn't appear to be able to handle putting class hierarchies in .llb files due to name collisions. I tried a build with each class in its own llb, as well as various other schemes, but none of them worked. For the time being I'm stuck with using a directory structure build.
  7. Just to follow up and record information for future readers who may have the same problems I did... (Thanks to Ton for pointing me in the right direction.)

I don't have the Professional version of VIPM, and the limitations of the Community version made it inadequate for my requirements. I ended up spending several hours with the OpenG Builder and figured out how to use it to distribute "released" code to my user.lib. It appears to do everything I need quite well, and there are many ways to customize the distribution. If anyone is not using OGB, I highly recommend looking into it. It is much better than LabVIEW's Application Builder. (Though OGB does require Application Builder to compile executables.)

I decided to create a single project file for each class inheritance hierarchy. When I make changes to any VIs within that project, the entire hierarchy is re-released to my user.lib. On the dev side, each project file and hierarchy is wholly contained within a directory with a version number for a name (i.e. \MyDevProjects\Toaster Classes\v1.00\Toasters.lvproj ...). I do NOT include version numbers in the names of any source code VIs. When I need to make a minor rev I simply copy the folder and rename it with a new version number (i.e. \MyDevProjects\Toaster Classes\v1.01\Toasters.lvproj ...). This makes it easier to keep track of which version I'm working on at any given time and allows me to view multiple minor versions simultaneously during dev, as long as the minor versions are in separate projects. Since all the files are wholly contained within that directory structure OR reference VIs from user.lib, there are no broken links in the new version.

As Ton mentioned, OGB has the ability to append suffixes ("namespaces") to filenames during the build to guarantee uniqueness while maintaining all the links. I decided on a namespace that includes the major version number in the code released to user.lib (i.e. "MyToasterVI__toasters_v1.vi" — see the sketch at the end of this post). If I make a change to a class that breaks backwards compatibility, I'll increment the major version. This allows me to use multiple released major versions of the same hierarchy at the same time. Minor versions and bug fixes are automatically integrated into any VIs that use that class hierarchy. Minor versions *must* be backwards compatible, so I don't anticipate needing to use more than one released minor version at any time.

I have 4 class hierarchies and one lvlib that were all linked within my dev directory. Once I figured out how to use OGB, it only took me ~6 hours to break them apart, create build files, and relink to the correct files in user.lib. Much less than the 30 hours the previous exercise took. There are a couple things to look out for though:

Take the time to figure out where the leaves of your calling tree are and work on those first. When a VI uses a VI/ctl from another hierarchy or library, that hierarchy or library needs to already be distributed to your user.lib. This is obvious once you run into the problem, but may not be obvious when you start.

Take the time to think about the long-term consequences of your distribution scheme. I decided to distribute my classes to user.lib by maintaining my subdirectory structure instead of using an llb. This got me up and running quickly while maintaining some level of palette organization. I realized after the fact that once I release a public VI in a certain location, I can't move it without creating a maze of destination exceptions. Since I frequently change the locations of source files during dev, I'm going to have to go back and change the distribution to an llb and relink all my dev code. I believe this will allow me to arbitrarily change source file locations without affecting the released code in user.lib. I'm also going to release it to \user.lib\_MyTools\... and create custom palette menus for each class. More work up front... easier long-term maintenance and usability.

Hope this helps someone else through the pain of figuring out how to organize projects.
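For anyone curious what the namespacing looks like mechanically, here's a rough sketch in Python of the rename OGB applies during a build. This is an illustration only — the function name is mine, not OGB's actual implementation:

```python
# Illustration only: mimics the "__namespace" suffix scheme described
# above. The function name is hypothetical, not OGB's actual code.
def namespaced(filename: str, namespace: str) -> str:
    """e.g. namespaced('MyToasterVI.vi', 'toasters_v1')
    -> 'MyToasterVI__toasters_v1.vi'"""
    stem, dot, ext = filename.rpartition(".")
    return f"{stem}__{namespace}{dot}{ext}"

print(namespaced("MyToasterVI.vi", "toasters_v1"))
# -> MyToasterVI__toasters_v1.vi
```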
  8. QUOTE (eaolson) Grr... you can say that again.

QUOTE (eaolson) ...using svn-externals...

We don't use svn, so I'm not familiar with the terminology. What are svn-externals?

QUOTE (eaolson) I'm a single developer, am still working the details out, and may not know what I'm doing.

Me too, except change "may not" to "certainly does not."

QUOTE (Ton) If you had used the exact same name for the library and the VIs you would have no relinking issues.

I used to use the same name, but that's what led to all my headaches in the first place. However, based on your comments it looks like the root of my problem is not separating "dev code" and "production code." I'll apologize now for the length of my posts; I need to make sure I fully understand how all this fits together.

Here's the workflow I've been using: Open a project that contains my top-level application. (In my case it is a .lvlib I will use with TestStand rather than an executable.) The Toaster, I2C Interface, and I2C Slave class hierarchies I am developing in parallel are listed under Dependencies. All the code is located in my dev folders. Dev folders and classes have version numbers in their names. When I find something that needs to be changed in my class hierarchy, I check out the code, make the change, and check it back in. In most cases I have all the calling VIs loaded, so unexpected broken links aren't an issue. I do this partly because I (again) don't know a better way and partly because it seems like it will be easier to move the source code to other computers. For this application I'm developing on one computer and running the code on a second computer. When I need to test the code I just open up Beyond Compare and copy it over to the test station.

Here's the workflow I think you're describing. I'm thinking through this as I type it up, so tell me where I go wrong: Develop each of my classes independently, perhaps by creating a single project for each hierarchy. Dev folders and classes do NOT have version numbers as part of the name. Build my hierarchies to some location other than my dev folders, such as user.lib. If I use a tool that appends a suffix to the class member names, the VIs I'm using in my top-level application won't interfere with the class members in my dev folders. When I need to make a change to my classes, I do so in the dev folder, close my top-level project, and rebuild the class hierarchy to user.lib. Any application that uses those classes automatically uses the most recent version.

If I need to make a change to a class that breaks backwards compatibility, but I also need to use the previous version in projects... Use the same dev folder for v2.0 that I did for v1.x; this allows all the links to remain intact during dev. In my Source Dist build specs, use a different suffix for v2.0 VIs than I did for v1.x VIs; this allows both versions to be on the palette and used in applications at the same time. (Optional) Send the Source Dist to a separate folder in user.lib. The versions will have different names so it's not required, but it's nice to keep the code separated. I can only have one version of the class in my dev folder at a time, so if I need to modify v1.x I check in v2.0 and check out v1.x.

Assuming I have all that correct, I have a couple questions. (Big surprise.)

Is it best to wrap a single class hierarchy in a project and distribute it as a whole, rather than have each class in its own project and distribute them individually? My gut says yes, but I need to ask anyway.

When developing my classes, should any VIs that are not part of my class hierarchy be taken from user.lib? I'm specifically thinking about my Toaster child classes, which include (as opposed to inherit from) the I2C Interface base class. I assume I should be using the versions from the palette, as they will remain updated whenever I build the class source code. If I want to update the Toaster child classes to use I2C Interface v2.0, I need to manually update all the I2C Interface v1.x VIs/controls/typedefs the Toaster child class uses, since the v2.0 VIs have different filenames, correct?

On the other hand, the Toast Master class inherits from (as opposed to includes) the Toaster base class, meaning I should inherit from and use the Toaster VIs in my dev folder rather than user.lib while developing the Toast Master class? Otherwise I'm developing a child class against a previous build of the Toaster class instead of the Toaster class currently being developed.

If another developer (hypothetical, as I'm currently the only developer here) needed to do some work on my top-level application, what's the best way to make sure he also gets the correct versions of the classes in vi.lib?

What's the best way to change my class names and VI locations (to remove the version number from the directory name) without completely trashing the work I've done so far? (AQ did address changing the name below...)

QUOTE (Ton) PS Fixing classes inside OpenG builder would fix VIPM, so if you have time join us.

Uh... don't you want developers who actually know what they're doing?
  9. The _Web Document Tool.vi is password protected in 8.5. Are there other ways to access the private VI Server call?
  10. QUOTE (Aristos Queue @ Sep 3 2008, 09:52 AM)

Let me respond to your points in reverse... b) ".detpurroc" naht esu ot drow retteb a ylbaborp si "nekorB" Wait, that's not what I meant...

b) "Broken" is probably a better word to use than "corrupted." Although when I can't remove classes or VIs from a project because LV can't find them, I'd call it "really, really broken." (In the past I've had mixed results resolving this by creating an empty class of the correct name just so LV would be able to find something to load. Then I'd remove it from the project.)

I realize renaming or relocating code modules is generally a bad idea. All too frequently I find myself in situations where I need to do one or both of those to maintain code organization. Chalk it up to my inexperience and trying to develop reasonable universal directory structures without fully understanding what the final code hierarchy will look like. For example, part of my initial directory structure (which is mirrored in SCC) was \v1.0\Toasters\Base Class\..., \v1.0\Toasters\Toast Master\..., etc. Eventually I decided it made more sense to use a \Toasters\Base Class\v1.0\..., etc. directory structure. Undoubtedly the majority of the 30 hrs I spent was fixing issues I created when trying to move or rename code using methods that ultimately didn't work, although some of it was due to untimely crashes. (LV 8.5 crashes *every time* I try to move a folder using the "Move on Disk..." context menu.)

The other situation I run across frequently is refactoring code during development. When I have a VI I am no longer using, I'll prefix the name with "DEP" to indicate it is deprecated. While opening a project with all the code that uses the VI works, it's not necessarily easy to do. If I'm working on my Toaster hierarchy and need to change something, I need to dig through all our SCC to find everything that uses those classes. I've missed things more than once and spent the next several hours grinding my teeth while fixing broken links.

a) There are a couple reasons I put the version number in the name... The primary reason is I don't know any better. I used the shotgun approach to figuring out a solution and this is the first one I found. It allows me to load multiple versions in the same project, which was key for me to avoid cross-linking and broken links. (i.e. Different VIs in a project can use different versions of the code.) It makes it easier for me to tell which version I'm using when coding client VIs; context help shows the version as part of the name. I decided on this convention after development had already started, and I can't decrement the version number of a user-defined class. That said, I'm not locked into keeping version numbers as part of the name. If there are better ways to handle it I'm all ears.

Regarding copying the entire hierarchy and working on the copy, this raises a question about which SCC model to use. (Keep in mind my local directory structure mirrors the SCC directory structure.) As far as I know there are a couple SCC models that are commonly used:

Single Trunk - Released code is always stored in the same location. Dev code is branched off the trunk and merged when completed.

Parallel Trunks - Each version gets its own trunk. When a new version is released, the previous trunk is killed and a new trunk created.

I migrated towards the parallel model because it seemed to ease my problems with broken links, at the cost of lots of replacing when upgrading a client VI to a new module version. You seem to be suggesting the single trunk model is more appropriate?

In general, do I only need to create copies of the code module I'm changing, or should I create a copy of the entire hierarchy? For example, if I'm refactoring my Toast Master class, do best practices dictate copying the entire Toaster hierarchy, or is making a copy of just the Toast Master class adequate? (Please don't say I should make a copy of all the VIs in the project.) How do you go about integrating the new test code into your application? You can't load the new Toast Master class into the project without manually replacing all instances of the old Toast Master class members in the main application.

QUOTE (Ton @ Sep 3 2008, 10:24 AM) I'm absolutely against putting versions into any filename. You should use the built-in versioning capabilities of classes, xcontrols and libraries. If you put the version into the library name you have to relink upon upgrade. To beat cross-linking it is better to have a build process in which the source is named differently from the code you use in other projects. There are several ways to do so. One of them is VIPM or OpenG builder. Ton

My cross-linking issues occur in the dev environment, and those solutions seem to be geared towards building executables. Am I missing something?
  11. Does anyone have a simple and effective way to manage versioning LabVIEW classes and libraries? I've finally figured out something that seems to work (although it took me ~30 hours of fighting broken links, missing classes, etc.), but it is fairly cumbersome.

The basic idea is that I include a version number in the name of the module (class/lvlib). In my toaster hierarchy example I might have an "I2C Slave Base v0.90.lvclass" contained in a folder with the same name and version number. If I'm going to branch code to try something new, I copy the module folder and rename the module (and folder) with a new version number ("I2C Slave Base v0.91.lvclass"). I've found this works well--I can add the new module to a project without any cross-linking issues.

There are a couple scenarios that still present some difficulty. My toaster child classes all use Slave Base v0.90 as a private data member. If I want to upgrade Toast Magic to use Slave Base v0.91, obviously I have to replace the class cube in the private data. What's less obvious is that I also have to go through all the Toast Magic VIs and replace any exposed Slave Base controls/typedefs/constants Toast Magic uses. This also applies to having a child class inherit from a new base class version. I wish LabVIEW were smart enough to realize that class controls/typedefs/constants used in the client VI should be taken from the actual class being used.

This process is much more cumbersome if I'm upgrading an lvlib instead of a class. Since the library VIs don't have a dynamic input like class VIs, I have to go through and replace every single control/typedef and library VI with the correlating item from the new library version. (I've uttered many curses over LabVIEW not having a more customizable dev environment. Would it be that hard to enable more hotkey options? Replace With...?)

Even doing something as simple as renaming a module creates all sorts of linking problems. If I'm working in Toast Magic and rename I2C Slave Base to I2C Slave Base v0.90, the next time I open Super Toast 2000 or 3000 the entire project is corrupted, since it can't find the class name it is looking for and, contrary to the behavior when looking for missing VIs, you must select a class with the same name. (This has bitten me countless times... you'd think I'd learn not to do that.) This issue is the primary reason I started using the above convention.

Are there better/easier ways to deal with module versioning? (I do use SCC but still haven't figured out the best way to manage projects with it.)
  12. QUOTE (Omar Mussa) Naw... I think I'll code for a month and then try it out after I'm done. It's much more exciting that way. :laugh: Actually, since the TestStand Gods (one of which sounds remarkably like Rick Francis) indicated I should bundle up my VIs into larger elements, passing variants becomes a moot point in my case. (Thanks to His prophet AQ for bringing His attention to my questions. What is the penance for my sin of too small TestStand elements?)
  13. QUOTE (Omar Mussa) That's almost what I did.

I flattened it to a string and just pass the string from step to step. (I am lucky in that I wasn't planning on having TestStand do anything with the classes.)

QUOTE (Omar Mussa) TestStand can't even properly pass Variant data to LabVIEW... A workaround exists but it kind of sucks

I just ran into this problem today. Is the workaround flattening the variant to a string?

QUOTE (Omar Mussa) Well, no offense but the lesson learned is PROTOTYPE before you dive in too far (a month is WAY too far). If you can't get a prototype to work, you need to find a solution or redesign -- tech support or forums are helpful for that too.

None taken. You've heard of biting off more than you can chew? With this project I can't even get a bite... I have to swallow the whole thing in a single gulp! There are a couple reasons why I didn't prototype the TestStand part of this project before developing code:

TestStand came to the project late. It wasn't part of the original design and only surfaced within the last two weeks. The local rep indicated that he had never tried LVOOP with TestStand, but he expected it would work.

I've been a bit overwhelmed by the scale of the project and the amount of new information I've had to learn. Six weeks ago I had done pretty much nothing with LVOOP, NI-Motion, or TestStand. "Drinking from a firehose" doesn't begin to describe it! I'm thinking sticking my head at the base of Niagara Falls is a more apt description. To avoid drowning I put off the TestStand part, expecting that it would work.

To be honest, wrapping my class methods in a flatten/unflatten VI isn't that big of a deal. I am wondering if I'm trying to use TestStand with LabVIEW modules that are too small (i.e. don't do enough). Maybe I should bundle them into larger components.
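To make the flatten/unflatten wrapper concrete, here's a rough Python analogy of the pattern. pickle stands in for LabVIEW's Flatten To String / Unflatten From String, and do_work() is a hypothetical method name — this is just a sketch of the idea, not anyone's actual implementation:

```python
import pickle

# Rough analogy of wrapping a class method in flatten/unflatten so
# TestStand only ever sees a string, never the object itself. pickle
# stands in for Flatten To String / Unflatten From String, and
# do_work() is a hypothetical class method.
def teststand_step(flat_obj: bytes) -> bytes:
    obj = pickle.loads(flat_obj)     # unflatten at step entry
    obj.do_work()                    # call the real method on the object
    return pickle.dumps(obj)         # flatten again for the next step
```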
  14. Silly non-LabVIEW-specific question: How does one go about using LabVIEW classes in TestStand? I designed and implemented an entire class hierarchy with the idea of using TestStand to glue it all together. I plunked down my first VI that uses a class and discovered TestStand doesn't know what a class is. Any idea how to pass an object from one VI to the next? (I sure hope the last months of work haven't been wasted.)
  15. I've been doing some programming with NI-Motion and came across a VI that appears to have a minor bug. Most of the VIs in the package use enums to input axes or vector spaces, but one VI appears to have forgotten to pick up the enum on his way out the door. Instead of the normal "Axis or Vector Space" input, Load Move Constraint.flx has an "Axis or Coordinate ID" input. That input has an I32 strict typedef associated with it, compared to the U16 of the normal enum input. I've come up empty searching help for "Coordinate ID," so I assume it's a bug. I'm guessing I could set the I32 to the value of the equivalent enum... I haven't tried it yet though. (A few of the camming VIs use "Axis ID" in a similar manner.)
  16. I understand now. The benefit of passing the UsrEventRef to the dynamic VI is that it can carry the data back out to the main VI instead of having to do weird things like getting references to controls. (Sometimes it takes a while for good ideas to penetrate my thick skull...) Purely hypothetical question... If I didn't need any data from the dynamic VI, would there still be a reason to pass the UsrEventRef to it? Or would it make sense to simplify the main VI by containing the UsrEventRef entirely in the dynamic VI and passing in the EventRegRef instead?
  17. QUOTE (Yair @ Jul 25 2008, 02:48 AM) You didn't specify whether you meant the User Event Refnum or the Event Registration Refnum, so I'll assume you meant the User Event, since I do pass the Registration Refnum to the dynamic VI. If you did mean the Registration Refnum, then... we agree! Why have the main VI pass the event refnum to the dynamic VI? The main VI isn't concerned with the event refnum, only that the event will (or will not) fire and that the correct event structure (if there is more than one) is handling it. This can all be handled with the registration refnum. Forcing the main VI to unnecessarily handle the user refnum just creates more bookkeeping. Is it just a personal preference, or am I missing something technical?
  18. Thanks to everyone's help, I have an example "Timer" implementation that does most of what I want it to do. I can't read data from the dynamic VI until it stops running, but that's okay. I can start/stop the timer and register/unregister for the event independently. I haven't quite wrapped my brain around the difference between an Event Registration Refnum and a User Event Refnum, other than that the User Event Refnum can carry data with it. Is the idea that an event structure can have only a single Registration Refnum associated with it, but each Registration Refnum can have multiple User Events? In other words, the Reg Ref defines which event structure handles the event while the User Ref defines the event that occurred? The other thing that concerns me is that I don't use the Unregister For Events or Destroy User Event prims at all, primarily because I haven't figured out where to put them yet. I think repeatedly hitting the Register button will create a memory lake. I'll have to play around with that a bit more.
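For my own sanity I sketched the relationship in Python. This is purely an analogy with made-up names, not how LabVIEW implements refnums: the user event is the thing you fire (and it can carry data), while the registration is the queue that one event structure drains, which can be fed by several user events.

```python
import queue

# Made-up Python analogy, not LabVIEW internals: a UserEvent is the
# thing you fire (it can carry data); a Registration is the queue a
# single "event structure" drains, and several UserEvents can feed it.
class UserEvent:
    def __init__(self, name):
        self.name = name
        self.sinks = []

    def generate(self, data=None):
        for q in self.sinks:
            q.put((self.name, data))

class Registration:
    def __init__(self, *events):
        self.queue = queue.Queue()      # one queue per event structure
        for ev in events:
            ev.sinks.append(self.queue) # register: many events, one queue

tick = UserEvent("tick")
done = UserEvent("done")
reg = Registration(tick, done)          # one registration, two user events

tick.generate(42)
done.generate()
print(reg.queue.get())                  # ('tick', 42)
print(reg.queue.get())                  # ('done', None)
```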
  19. Thanks for coming out of the shadows to share that, Scott. (Keep up that awesome posting rate! You'll hit 1000 sometime in 2408! ) I did figure out how to use Norm's demo, but the sheer number of VIs left me puzzling over exactly how it works and how to implement it. I think that's the route I'm going to try first. As a bonus, your demo will help me understand Norm's better. Couple questions: Is there any particular reason you create the user event and generate it in the same VI? Since I am implementing this in a class, is there anything preventing me from exposing a public Register Event VI and having a private Generate Event VI? Rather than have Main.vi register for the event at program start, I need the ability to Register and Unregister at runtime. Are there any gotchas if I were to implement it this way?

QUOTE

My Stop VI wraps a functional global and sets it to TRUE. The dynamic VI checks the functional global after every iteration. I think doing it this way will help avoid a race, but I can see how I might drop a data point or two at the end.

QUOTE That's exactly what the Wait on Notification node does: monitors a notification.

Yes, but somebody still has to monitor the notification to see when it occurs. So wouldn't I either have to check for the notification with every iteration of my main processing loop, or have a separate loop dedicated to checking the notifier? Note the dynamic VI will be started and stopped several times during a test, and Main.vi needs to continue running in parallel with the data collection. (Although I am looking at user events as my primary path, I'm still learning a lot from the notification discussion.)
  20. Your responses are great! They are getting fairly detailed, so let me provide a bit more context for my problem...

I have a Toaster class that reads data from a serial port. The methods I have (among others) to retrieve data are Start, Stop, and Read Buffer. Start spawns a VI that continuously captures data and puts it in an array. Stop (obviously) stops the data capture process. Read Buffer retrieves the data that is in the buffer. For these methods I don't think I'll need a notifier, as the main VI tells the spawned VI to stop. I also have "Read n" and "Read t" methods that capture n data packets and capture packets for t time, respectively. These would benefit from the notifier, I think. (There's a rough sketch of the Start/Stop/Read Buffer pattern at the end of this post.)

QUOTE

Two questions: For this to work wouldn't I need a separate loop in the main VI dedicated to waiting for the notifier? And since my implementation is in a class, I don't have a main VI in my class to monitor the notification from the dynamic VI. How does that change things? I suppose ultimately I will have a main VI somewhere. I could create an "Enable Notification" method that accepts the notifier as an input.

QUOTE The problem with that is the indicator won't be updated until the dynamic VI finishes and you're not exactly sure when that will be. So Get Control Value may return the initial value or the final value depending on when you call it.

Isn't this resolved by putting the indicator in the data collection loop, where it is continuously updated?

QUOTE I believe I remember both Sciware GOOP and Tomi's OpenG LVOOP by-ref wrappers having actor VIs which communicate back this way.

I'd love to learn how to use those packages, but right now using advanced add-ons such as those would create support issues in the long term. Although I suppose it does lend itself to a certain amount of job security...
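In case it helps, here's the shape of that Start/Stop/Read Buffer pattern sketched in Python. All of the names are stand-ins, not real LabVIEW APIs: the thread plays the spawned VI, the Event plays the functional global, and random.random() fakes the serial read.

```python
import threading, time, random

# Sketch only: thread = spawned VI, Event = functional global stop
# flag, random.random() = fake serial read. Names are illustrative.
class Toaster:
    def __init__(self):
        self._buffer = []
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._worker = None

    def start(self):
        """Spawn the capture 'VI'."""
        self._stop.clear()
        self._worker = threading.Thread(target=self._capture, daemon=True)
        self._worker.start()

    def _capture(self):
        while not self._stop.is_set():   # stop flag checked every iteration
            sample = random.random()     # fake serial read
            with self._lock:
                self._buffer.append(sample)
            time.sleep(0.01)

    def stop(self):
        """Signal the capture loop and wait for it to finish."""
        self._stop.set()
        self._worker.join()              # last value is now in the buffer

    def read_buffer(self):
        """Return and empty the buffer."""
        with self._lock:
            data, self._buffer = self._buffer, []
        return data
```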
  21. QUOTE (Norm Kirchner @ Jul 23 2008, 08:03 AM) Yikes! I downloaded the demo you posted... I'm not even smart enough to figure out what it's supposed to do, much less code something like that. Even if I could figure it out, our "developers" aren't knowledgeable enough to maintain or modify an application using that kind of infrastructure. (I have a hard enough time trying to get people to use LabVIEW projects... A "state machine?" What's that?)
  22. QUOTE (normandinf @ Jul 22 2008, 05:34 PM) Thanks for the tips, Norm. If I understood Christina's blog correctly, the reason my mock-up works is because the front panel is opened while the VI is running. In my real code it will not be open, so it would have shut down immediately. I would have been pulling my hair out over that one.

QUOTE You can use the method "Get Control Value" to retrieve a certain value from your dynamically loaded VI. Wire the label name of your control/indicator to get a Variant of its current value.

After pondering it a bit more, I think I'm going to load a reference to the dynamic VI at program start and maintain it until program shutdown. There should only be a single instance of the VI at any time, so I think doing that will simplify things a bit, plus save me some time by not opening and closing a reference several times. To do this I'd wire FALSE into Auto-Dispose and use Close Reference when my main app shuts down, correct?

Let me make sure I understand the implications of your suggestion... I use "Get Control Value" in my main VI with a reference to the spawned VI wired into it. Since my main app doesn't know when the spawned VI stops, it will need to poll Get Control Value until it sees the values are no longer changing. (Or I could just put a "Stopped?" boolean indicator on the connector pane.)

Tangential question: Suppose the spawned VI was storing data in a buffer and I wanted to periodically read and empty the buffer. I don't suppose there's a way for the spawned VI to know when its indicator has been read? (Maybe I could use a "Trigger" control on the spawned VI that would place the collected data on a buffer indicator. I don't have LV right in front of me, but I assume there is a "Set Control Value" prim?)
  23. I have an application that needs to spawn a VI in a parallel process, have it stop when triggered, and return the last value from the spawned VI. (This is for the Start Toast and Stop Toast methods in my Toaster Base class.) I created a mock-up to help figure this out. The first part wasn't too hard, and I implemented the second part by having the spawned VI check the status of a Func Global. I'm not sure how to do the last part though. I've seen a few mentions of weakly typed references, strongly typed references, and call by reference nodes on the forums and in LabVIEW help, but I haven't found anything that points me down the path of putting it all together. Clues? Hints?
  24. QUOTE (MikaelH @ Jul 20 2008, 09:57 PM) Thanks for the response, Mikael. I ended up implementing most of the structure over the weekend and did almost exactly what you suggested. The only difference is that I didn't create a Toaster I2C and Toaster Serial class. Instead I just made the I2C Slave class a member of the Toaster classes that use I2C, and I'll implement the serial protocol directly in the Toast All class. I think this will work okay. The only problem I can see right now is how to configure the I2C interfaces and Toasters. You suggested a pop-up configure dialog box in the other thread, so I'll probably go that route.

[Image: http://lavag.org/old_files/monthly_07_2008/post-7603-1216650634.gif]
  25. Earlier I had a question regarding class design that MikaelH kindly answered. Now I have a related, but more complex, problem.

I have four "toasters" I need to test: Toast Magic, Super Toast 2k, Super Toast 3k, and Toast All. Three of the toasters communicate with the host computer via an I2C interface while one, the Toast All, communicates via a USB/serial UART. That creates an issue in that the TA toaster has a very different communication process. The I2C toasters continually check the state of the toast and update their internal registers to reflect the current state. The TA toaster continually checks the toast state and sends it out the serial port. If the UART has been initialized, each toast state is stored in a buffer that will be dumped all at once at my next read, which may not be for quite some time.

Furthermore, there are potentially three different devices we will use for I2C communication. All can do basic reads and writes, but each has its own unique feature set. The 8451 can operate only in master mode, the Aardvark can do master or slave as well as monitor the bus (in case the blender wants to talk to the food processor), and the Corelis can do everything the Aardvark can as well as fiddle with low-level bit timing.

Here's a UML diagram of what I'm thinking of doing. (I don't really know UML, so some of the notation might be incorrect.) I've also attached a copy of the Visio diagram if anyone wants to modify it. Explanation: Each I2C interface class inherits from an abstract I2C Interface Base class. Each toaster class inherits from an abstract Toaster Base class, which inherits from an abstract I2C Slave Base class. The I2C Slave Base class contains the I2C Interface Base class as part of its private data.

Whew! Enough with the problem description. On to the questions...

Q1. The I2C Slave Base class contains the I2C Interface Base class, but the Interface Base class is abstract and should not be instantiated. Does this cause problems?

Q2. Currently I have the Toast All toaster inheriting from the I2C Slave Base class even though it is not an I2C device. My thought is to simply override the I2C read and write methods and have them use the serial port instead. On the other hand, having a serial device inherit from an I2C class seems... wrong... like I'm violating the rules of OOP. It would be possible to put the Interface Base class in the Toaster Base class or even in the Toaster classes themselves, but I get the sense this will create more problems that I can't quite grasp right now. Also, the I2C Slave Base class will be used in other applications and with other devices. The Interface Base class seems like it belongs in the Slave Base class. Comments? Suggestions?

Q3. The three I2C interface devices have different capabilities. What's the best way to expose them? One method is to take the union of all the functions and put them in the base class. The child classes could then implement those that apply and ignore the others. Another method is to implement each unique function in the child class, but I'm not sure how accessible those functions will be to the Toaster classes. And what happens if I drop a Corelis bit-timing VI on the diagram and at runtime the interface is an Aardvark?

Q4. Related to Q3, I think I need a Clear Buffer method for the Toast All toaster. Should I put that in the Toaster Base class even though it is meaningless to the other toasters?

Q5. Each toaster has very different abilities when it comes to configuration. I don't even know how to approach this... I believe they all require a write to a register to change config (except for the Toast All, which has no configurable settings), so I could simply expose the Write methods from the I2C Interface Base class and try to detect the object type at runtime so the right commands are sent. That feels very clunky though. Ideas? (Now that I've typed this out, it seems like it's another variation of Q3.)

Q6. What's the significance of class names being bolded on a UML diagram? Some are, some are not. I can't figure out why.

(Looking over my diagram again, I see I'm going to need to wrap the I2C Interface Base methods and expose them in the I2C Slave Base class. If nothing else, I can say the time I spent learning what little UML I know was well spent. It has really helped me think this through.) Any and all comments are welcome!
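Since my UML is shaky, here's the same hierarchy sketched in Python as a sanity check on the relationships. The class names mirror the diagram; the method signatures, register address, and everything else are my own guesses, not a real implementation:

```python
from abc import ABC, abstractmethod

# Python stand-in for the LVOOP hierarchy above. Class names mirror
# the UML; method signatures and values are illustrative guesses.
class I2CInterfaceBase(ABC):
    @abstractmethod
    def read(self, addr, n): ...
    @abstractmethod
    def write(self, addr, data): ...

class Aardvark(I2CInterfaceBase):
    def read(self, addr, n): return b""
    def write(self, addr, data): pass
    def monitor_bus(self): pass              # child-only capability (Q3)

class I2CSlaveBase(ABC):
    # Composition (Q1): the private data is *typed* as the abstract
    # interface, but only a concrete child is ever instantiated.
    def __init__(self, interface: I2CInterfaceBase):
        self.interface = interface

class ToasterBase(I2CSlaveBase):
    def read_state(self):
        return self.interface.read(0x40, 2)  # register address is made up

class ToastAll(ToasterBase):
    # Serial device: overrides the I2C-flavored path to use a UART
    # instead (the "feels wrong" option from Q2).
    def read_state(self):
        return b""                           # would read the UART here

toaster = ToastAll(Aardvark())  # abstract in the data, concrete at runtime
```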