Mads

Members
  • Posts

    437
  • Joined

  • Last visited

  • Days Won

    27

Everything posted by Mads

  1. On the 3D Picture Control\Helpers palette there is a nice VI called Sensor Mapping. This VI lets you import a 3D model, place sensors on the model - and then generate a picture with the readings shown as an intensity plot overlaid on the 3D model. This is *almost* what I would like to use in a new application, except that the sensors should not be treated as points but as lines. It would also be nice to be able to import a wider range of 3D drawing formats (ideally it should be possible to create some simple 3D models with the app too, but that would just be a bonus). Does this task sound familiar to anyone? I'm tempted to outsource this part of the coding as we have enough work to do on other parts of the application, but if anyone has tips or examples on how to e.g. modify the intensity mapping code in the Sensor Mapping VI, that could be enough. Mads
  2. Here's a good site to look at: http://blogs.msdn.com/cjacks/archive/2008/02/05/where-should-i-write-program-data-instead-of-program-files.aspx The discussion on the page highlights some of the problems people have with how things work in Vista/Win7.
  3. The problem with the public application data folder is that, by default, only the user that first ran the program has write access to the configuration files the application created under ProgramData. One might say that the data is not really public...ironically enough. So if a second user logs in and you want that user to see the same application configuration, and maybe also do things that will change it, ProgramData will not work unless you change the access properties of the files in ProgramData. There are plenty of web sites that discuss this dilemma, and it's the reason why I suggested this idea on the Idea Exchange. This is why we normally use the user's application data folder instead if it's a user-run program, but the ProgramData folder if it's a service that always runs under the same user. Mads
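     In Python terms the folder choice we use looks roughly like this (a sketch only - the environment variables are the standard Windows ones, the application name is made up):

        import os

        APP_NAME = "MyApp"   # hypothetical application name

        def config_dir(per_user: bool) -> str:
            """Per-user AppData for user-run programs, ProgramData for
            services that always run under the same account."""
            base = os.environ["APPDATA"] if per_user else os.environ["PROGRAMDATA"]
            path = os.path.join(base, APP_NAME)
            os.makedirs(path, exist_ok=True)
            return path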
  4. The PDA module allows us to develop for CE-embedded on tiny SBCs. CE is not really (just) for cell phones...so I really hope they will continue to support that at least. NI is already generating C code and .Net assemblies though so perhaps it's not such a leap to support Win7 Mobile.
  5. I'm looking for a low power single board computer that can run LabVIEW applications and is suitable for integration into a measuring device. Does anyone have experience with LabVIEW on small SBCs running e.g. Windows Mobile? The two basic tasks for the SBC are to communicate over RS485 with an external board, do some number crunching on the data from that board, and store the results for later retrieval over TCP or another serial link...The power consumption should be less than 6 W. We have been looking at sbRIOs or the possibility of using the cards inside a PAC unit (more compact than sbRIO)...but the form factor is too large and the prices are much higher than e.g. an SBC running Windows Mobile or XPe, so it looks like we'll need to find a suitable SBC instead. Mads
  6. Have you tried switching between themes? Perhaps if you set the theme to Classic and then back to Vista it might refresh the controls.
  7. As people here have already mentioned you can still use an OO-like approach: I've made several systems in LV 7 and older where you have multiple units multidropped on different types of communication links. The user configures a set of channels and can add units on each channel. Based on the configuration I instantiate the necessary channel handlers and unit VIs (open a reentrant reference to the VIs, or use VITs, and run them with the Run method without waiting for them to complete...use a notifier or user event to control their termination, config reload etc.). The unit "objects" have a generic queue-based interface to the channel handlers; each channel has an output queue and each user of the channel has an input queue. The unit configuration tells it what channel it is on, and it uses the channel tag to get a reference to the channel's queue. The unit can then send data to the channel using the channel queue - and the channel will return data on the unit's reply queue when the reply is available (using the tag of the unit embedded in the message it received on its queue). The queues pass clusters that, in addition to the message data and the tag of the user object, contain parameters like timers, expected reply length (if no reply is expected the channel handler knows that and does not wait for one), errors etc. Unless you make a general plug-in based framework with sub-panel based configuration windows etc. it can be difficult to make the system flexible enough to fit *all* types of protocols and units though. One way I've solved such cases has been to write plug-ins (launched by a general "plugin launcher") that handle certain types of IO outside the standard IO architecture. In many cases these plugins just "translate" the new protocols to one of the "native" ones and hook up to the rest of the system on a standard channel.
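     A block diagram can't be pasted as text, but the queue plumbing between channel handlers and unit objects looks roughly like this in Python (a sketch only, all names are made up):

        import queue, threading

        class Channel:
            """One handler per communication link: units post requests to its
            request queue and get answers back on their own reply queue."""
            def __init__(self, name):
                self.name = name
                self.requests = queue.Queue()
                self.reply_queues = {}               # unit tag -> reply queue

            def register(self, tag):
                q = queue.Queue()
                self.reply_queues[tag] = q
                return q

            def run(self):
                while True:
                    msg = self.requests.get()        # cluster: tag, data, expects_reply
                    if msg is None:
                        break                        # shutdown notification
                    reply = self._do_io(msg["data"])
                    if msg["expects_reply"]:
                        self.reply_queues[msg["tag"]].put(reply)

            def _do_io(self, data):
                return data                          # echo stands in for the real link I/O

        class Unit:
            """A unit 'object': knows its tag and which channel it lives on."""
            def __init__(self, tag, channel):
                self.tag = tag
                self.channel = channel
                self.replies = channel.register(tag)

            def query(self, data, timeout=1.0):
                self.channel.requests.put({"tag": self.tag, "data": data,
                                           "expects_reply": True})
                return self.replies.get(timeout=timeout)

        ch = Channel("COM4")
        threading.Thread(target=ch.run, daemon=True).start()
        print(Unit("sensor-1", ch).query(b"read command"))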
  8. The multicolumn listbox has two different modes - it can either be a scalar and tell you which row the user has selected, or, if you allow the user to select multiple rows, an array of the row indexes that are selected. You switch between these modes by right-clicking on the control and selecting "Selection Mode". If you write to the value property you can select rows programmatically; if you read it you get the indexes of the rows currently selected. The active cell is used to choose which cell you will be editing programmatically. If e.g. you want to change the background color of the cell in row 2, column 4 - you create a property node where you first set the active cell to 2,4 and then (just expand the property node to have another property underneath the active cell property) write your new color to the background color property... I suggest you show context help (Ctrl+H) and hover the mouse over the control and its different properties; the help window will teach you how to use it much better than we can do here. Check out the examples that come with LV as well, Help->Find Examples->Search tab->enter listbox as keyword...or try searching on ni.com.
  9. Sure, making VIs that accept all of the related objects is not a problem...but you still need to create the object, and that means that the callers must either deal with that directly (which we want to avoid, they should just hand off a Notifier name and a user event or queue ref of any type)...or you need to make one object creator for all the data types and join them into a polymorphic VI. The latter limits the new "Notifier with History" to the number of types you have VIs for in the polymorphic creator VI (another thing we do not want). The primitive notifier on the other hand has an obtain function that will accept ANY data type, not just objects of one of the correct classes. I'll probably make my Notifier with History limited to just strings this time...that will at least give more or less the same behaviour as the old Notifiers/Queues that used to be string-only too.
  10. A side note - but anyway: I did some experiments today just to see how I could implement this "Notifier with History", and I played with the idea of using LVOOP for it. For some reason I half-expected to be able to produce unlimited polymorphism this way, i.e. in this case the ability to create a notifier of any data type, yet have the top public VIs (obtain notifier, wait on etc.) be virtually the same regardless of the data type of the notifier - but as far as I can see that is still not possible. Even if you limited the number of data types you supported, and had different sub-classes for notifiers of those different data types (which would then extract and handle the standard queue or event reference that would in fact be at the core of the system), you would still need an old-style polymorphic VI to let the callers use the public VIs without hard coding the creation of the correct sub-class...right? (Have not used LVOOP much yet...)
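     For comparison, in a textual language the "any data type" part comes for free because the container can hold anything; a rough Python sketch of the behaviour I'm after (names made up, not an actual LabVIEW API):

        import queue
        from typing import Any, Dict

        _notifiers: Dict[str, "NotifierWithHistory"] = {}   # lookup by name, like Obtain Notifier

        class NotifierWithHistory:
            """Accepts any data type and buffers everything sent so far,
            so a late subscriber still sees the full history."""
            def __init__(self):
                self.history = []
                self.subscribers = []

            def subscribe(self) -> queue.Queue:
                q = queue.Queue()
                for item in self.history:        # replay what this subscriber missed
                    q.put(item)
                self.subscribers.append(q)
                return q

            def send(self, item: Any) -> None:
                self.history.append(item)
                for q in self.subscribers:
                    q.put(item)

        def obtain(name: str) -> NotifierWithHistory:
            """Create or look up a named notifier, regardless of data type."""
            return _notifiers.setdefault(name, NotifierWithHistory())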
  11. Me too :-) That way getting notifier references by name - and handling different data types - would be as clean and effective as it could be, and it's functionality that complements what is already available.
  12. User events will give you much of the "feeder" logic for free - that's a good idea. However you still need a queue manager that lets VIs register their feeds or subscribe to a feed...The manager would have an event reference lookup table. The point of this is that you do not know how many feeds or subscribers you have at compile time; instead the code allows you to add these dynamically...e.g. when you create plugins to your application that want access to a data stream. Ideally this could be something that could be packaged with LV...or OpenG, as it would be a very useful design pattern. Daklu - if there is not too much digging required...that would be very interesting to see, yes :-) Mads
  13. Notifiers guarantee that all VIs that wait for a notification will be notified, but they do not buffer the notifications (well, you can get the previous one, but otherwise they are lossy). Queues on the other hand have a buffer, but do not guarantee that all VIs will see the incoming data. If one VI dequeues data from a queue, other VIs that wait for data in that queue will miss it (making it rather uncommon to have multiple consumers on the same queue). Has anyone made a design pattern that combines the "delivery to all subscribers guaranteed" behaviour of the notifier with the "delivery of all data" behaviour of the (non-lossy) queue? Here is an example of how you would use such a "Notifier with buffer": You have a loop that measures something every now and then. To communicate the readings you have another loop that takes the readings and writes them to a different app using TCP/IP...You implement this as a producer-consumer pattern based on a queue...this way the TCP communication does not need to poll a list of readings and check which ones are new - instead it will sit idle and wait for new data..great. Later however you find that you want to add another function that needs the data. With a regular queue this would mean that you would need to modify your producer to actually deliver data to an additional queue... With a "Notifier with buffer" however, you would not need to change the code of the producer at all. You would just register a new subscription to "Notifier name" - and voila, your new code has a lossless stream of the data it needs. The fact that you do not need to edit the producer makes it possible to add such expansions to built applications, if you have given them a plugin architecture. Realising this idea would e.g. be possible by making a queue manager that pipes data from incoming queues out to multiple subscriber queues. Whenever you have a producer of data you create a queue and give a reference to it to the queue manager. If a VI wants to subscribe to the data, it asks the manager for a reference, using the name of the queue used by the producer. The queue manager looks up its own list of queues - and when it finds it, it creates an outgoing queue and gives the subscriber that reference. It also adds the subscriber to a list it gives to a dynamically created feeder. A feeder VI is a VI that waits for data from a given incoming queue (producer) and then writes that data to its list of outgoing queues... So far I've only played with this idea..and perhaps I'm overlooking an already existing alternative, or a big flaw?
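     The manager/feeder idea, sketched in Python just to make the data flow concrete (a sketch only, the names are made up):

        import queue, threading
        from typing import Dict, List

        class QueueManager:
            """Pipes data from each producer queue out to any number of
            subscriber queues, so producers never know who is listening."""
            def __init__(self):
                self._producers: Dict[str, queue.Queue] = {}
                self._subscribers: Dict[str, List[queue.Queue]] = {}

            def register_producer(self, name: str, q: queue.Queue) -> None:
                self._producers[name] = q
                self._subscribers[name] = []
                # one "feeder" per producer, fanning items out to subscribers
                threading.Thread(target=self._feeder, args=(name,), daemon=True).start()

            def subscribe(self, name: str) -> queue.Queue:
                q = queue.Queue()
                self._subscribers[name].append(q)
                return q

            def _feeder(self, name: str) -> None:
                src = self._producers[name]
                while True:
                    item = src.get()                    # wait for the producer
                    for sub in self._subscribers[name]: # lossless copy to everyone
                        sub.put(item)

        mgr = QueueManager()
        readings = queue.Queue()
        mgr.register_producer("readings", readings)
        tcp_feed = mgr.subscribe("readings")   # original consumer
        log_feed = mgr.subscribe("readings")   # added later, producer untouched
        readings.put(42.0)
        print(tcp_feed.get(), log_feed.get())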
  14. Absolutely Rolf, if anything I would call using DDE this way a kind of security through obscurity..I do not blame you for laughing. Everyone notices if an application opens a TCP port, you even get a nice warning from the Windows firewall. Most apps I make are servers (so I do not avoid the warning anyway...). I could have just added the command to the existing set of commands supported by my client-server interface...however it does not seem like a good idea to have such a thing available. If I had used TCP I would have put this on a separate TCP port, made sure that port would always stay blocked in the firewall, and added some password protection that "only" the launcher would be able to pass by accessing local information. There should be a better option for local communication...I can think of a couple, but neither of them is elegant... Mads
  15. With the events I mentioned - cursor move, window resize etc. - you do not need any of the intermediate events; the really important event is the latest one - the event that tells you where the cursor was moved last, how large the window is now etc...previous positions/sizes are of much less interest...so I do not understand why you see significant problems...after all this would only be a setting that you could activate for the events it is suitable for.
  16. This goes for all similar events - like the cursor move event or the window resize event e.g. - the GUI will be affected by how quickly the event is handled. This is why I typically just set a flag in the event case and then do the processing of the event somewhere else. The ideal solution would be to be able to give the event structure a lossy buffer, i.e. it should only keep the latest unprocessed instance of the event, not stack them up. (It has been proposed on the Idea Exchange already.)
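     The flag approach amounts to a lossy one-element buffer; in Python the idea would look something like this (a sketch only, names made up):

        import threading

        class LatestValue:
            """Lossy one-element buffer: keeps only the newest unprocessed event,
            so a slow handler never falls behind a fast stream of cursor moves."""
            def __init__(self):
                self._value = None
                self._fresh = threading.Event()

            def post(self, value):            # called from the event case
                self._value = value           # overwrite anything unprocessed
                self._fresh.set()

            def take(self, timeout=None):     # called from the processing loop
                if self._fresh.wait(timeout):
                    self._fresh.clear()
                    return self._value
                return None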
  17. Yes, you have a helper executable - that's what I call the "Launcher". When the whole application set is installed it adds entries to the registry that associate certain file types with the Launcher...This way the OS runs an instance of the launcher...which in turn runs the main application (if necessary) and transfers the file path to it. I have extracted the core of the code and attached it here... I have *not* taken the time to make it into a fully working example as that would take more time than I have right now...but you should be able to pick up the main ideas. OpenG VIs are required... It is about time that we get full support for file associations in LV..so kudos that idea on the Idea Exchange everyone, please. Mads File Launcher Example.zip
  18. I do it this way in my applications. The loader app gets the file path from the argument generated by the OS (due to the registry settings) and uses DDE to transfer the path of the opened file. If the DDE connection fails it assumes that the app is not running yet, starts it...and waits until it gets the DDE link. I do not launch the target app with arguments simply because I want it to work smoothly when opening multiple files: I have set the .ini file of the launcher so that it runs multiple instances (AllowMultipleInstances=True)...that way people can open multiple files in one go. The reason why I chose DDE instead of TCP is because it is less of an open door to the outside world...although I would prefer it if there was indeed a way to ensure that the communication was only possible locally on the PC (without using the firewall). ActiveX could be an option...as DDE really is a very old technology and might get phased out soon. Mads
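     The same launcher pattern in Python, using a localhost socket where my launcher uses DDE (a sketch only - the port number and executable path are made up):

        import socket, subprocess, sys, time

        PORT = 52417                                      # hypothetical local-only port
        MAIN_APP = r"C:\Program Files\MyApp\MyApp.exe"    # hypothetical target application

        def send_path(path):
            """Try to hand the file path to an already running main application."""
            try:
                with socket.create_connection(("127.0.0.1", PORT), timeout=0.5) as s:
                    s.sendall(path.encode("utf-8"))
                return True
            except OSError:
                return False

        def main():
            path = sys.argv[1]                   # file path supplied by the OS association
            if send_path(path):
                return                           # the app was already running
            subprocess.Popen([MAIN_APP])         # start it, then retry until it listens
            for _ in range(50):
                if send_path(path):
                    return
                time.sleep(0.2)

        if __name__ == "__main__":
            main()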
  19. I'm waiting for the Developer Suite first...we use VLM as well (does anyone know if we need a new license file, and when the upgrade DVDs will arrive? We just received new DVDs that still had 8.6.1...waste). One question for those of you who have installed it already: is it possible to build 32-bit apps in 64-bit LabVIEW? We'll probably need to support 32-bit systems for a while, but it would be nice to run the 64-bit version of LV and just have an option to build the app as 32-bit.
  20. Hey, thanks - I got around the glyph by editing a ring control after all...but I had not noticed that you can click the glyph as well in VIPM. The part I missed in this case was the "-" though...I tried _ .... Thanks again!
  21. In VIPM there is a GUI element I'm curious about how they made: the LabVIEW version and Filter by installation status rings - and the search box. They all have glyphs next to the text...and in the former there are separator lines between some of the entries, just like in a menu... One way to get the glyph is to edit a Modern style menu ring and replace its arrow element (and replace the rest with system style elements), but the arrow moves with the right side of the control so that gets messy if you want to rescale the control... Any tips (or a new article on the How did they do that page on the VIPM site :-))?
  22. If all you need to resize on the front panel is one graph then right-click on that graph and set it to scale with the panel. In order to ensure that other controls stick in their correct position and/or relation to each other you can group the controls and/or control where they stick by placing them in or outside the right quadrant of the scaling frame of the graph. If e.g. you want them to stick to the top of the scaling frame, the group of controls needs to have at least one element above the top of the resizing frame. In cases where you have multiple objects that need to resize, the solution is splitters. You can adjust the thickness of the splitter - if it is only a couple of pixels it will in fact not even show when you are running the VI...It's often OK to have the splitter visible though, it tells the user something about how the panel will behave. Mads
  23. We used to have a poster at the company where I work that said something like: "For every existing customer you lose you have to recruit 10 new ones to compensate". It may not always be true, but I think it's a very good rule to operate by. One small thing that could improve LabVIEW in this respect would be to let users choose between a set of user profiles during the installation process (for those of us that have Volume Licenses it would be nice to have an *easy* way to include our own profiles in the installation sets as well); that way we would not need to repeatedly deal with the "Express mode" that they have chosen as the default.
  24. The MGI VIs are in fact just as slow as the OpenG Config File VIs on my PAC units. The main weakness of the OpenG VIs is their use of a recursive call. The time consuming part in the MGI Read VI is "Get Type Info.vi"...which has a locked diagram, so I am unable to investigate it any further...
  25. FFT resolution

    As previously mentioned the resolution is decided by sample rate / number of samples; if I e.g. sample 2000 samples at 4 MS/s the resolution will be 4000000/2000 = 2000 Hz. It is possible to improve the situation a bit by zero-padding the signal though; this is often referred to as spectral interpolation. Let's say that you have sampled 2000 samples - prior to running the FFT you append n * 2000 zeroes to the sample array. If n = 1 this will double the number of FFT bins (the apparent resolution)...however we are really just talking about an interpolation effect here, so there is a limit to how much you actually gain. Mads
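    A quick numerical illustration of the bin spacing before and after padding (Python/NumPy, numbers taken from the example above; the test-tone frequency is made up):

        import numpy as np

        fs = 4_000_000                            # 4 MS/s sample rate
        n = 2000                                  # samples acquired
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * 151_000 * t)       # hypothetical 151 kHz test tone

        print(fs / n)                             # bin spacing without padding: 2000 Hz
        spectrum = np.fft.rfft(x)

        padded = np.concatenate([x, np.zeros(n)]) # append n zeroes (the n = 1 case)
        print(fs / padded.size)                   # apparent spacing: 1000 Hz
        spectrum_padded = np.fft.rfft(padded)     # finer grid, but no new information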