Everything posted by Mads
Posts: 446 · Days Won: 28
-
As others have already said here, your app needs admin access. I normally add the file association from the installer instead, as the installer will always have the necessary access anyway.
-
Consequences of unreliable run method
Mads replied to Mads's topic in Application Design & Architecture
Because if we did not, then surely nothing would happen? The two options you mention are nicely formulated. In the long run I hope NI will implement the first one, but the second one would be fine too. As part of the first option you mentioned a third one; that is my straw of hope here: perhaps there is an option where another type of method is developed, one that would have limitations in other respects but would solve this particular problem for now (until a more fundamental rewrite becomes inevitable, one that also allows an optimal solution to this issue).
-
As LabVIEW programmers we are all fairly used to finding ways around limitations in the development environment and in G as a language, but at times those limitations and bugs are so fundamental that it gets depressing. The graph axis bug introduced in LabVIEW 2009 was one such case: if you wanted to use graphs and upgrade to LV2009, you had to rewrite all your code so the scale markers would show up correctly, or wait months for the first revised version of LabVIEW (because even fundamental errors like that are not patched instantly...). The good thing about that example, though, was that you could actually make a fix yourself.

Well, now I'm depressed again, and this time the problem is much worse. There are ways to work around this one too, but man - it should not be necessary. I'm a bit surprised it has not been commented on before, because it is not really anything new - but once you run into its consequences it is extremely frustrating.

The problem: The Run method (and related methods, like Set Control Value) runs in the user interface thread - and that thread is blocked if, for example, the user opens a contextual menu somewhere in your application.

The consequences: In short, LabVIEW is unable to reliably start an asynchronous background process on demand. You can try to dynamically scale your software by instantiating a VI and running it with the Run method (with Wait Until Done set to false), but if you need that to always happen within a reasonable time, your application cannot have a user interface. It cannot, because if it does, the Run method might be stuck indefinitely if the user happens to open a contextual menu.

So when does this actually become a problem? Well, in my case its most painful impact is that all the client-server interfaces in my applications are unreliable because they, in order to allow an undefined number of simultaneous clients, create session handlers dynamically. A client might do a transaction that takes a lot of time, and with multiple simultaneous clients even quick transactions add up, so handling client transactions in sequence is not an option. Hence the use of dynamically created, parallel-running session handlers. This works fine, but if the local user of the server happens to open a contextual menu, my servers effectively become unavailable to any new connections. One minute my servers are there... the next they are practically off-line to all new clients because the local user happened to open a menu... and if we're really unlucky he might leave the PC in that condition and never return! An unacceptable situation, plain and simple.

So how do we get around this? Well, either we stop making software that both has to scale to outside requests and has a user interface... or we predefine a maximum number of parallel processes and instantiate them in an idle state at startup. Neither of these makes me proud or happy.

PS. When I first noticed this I posted two ideas to the Idea Exchange (here and here). They do not have any illustrations that catch people's attention, and as it turns out neither of them is very precise, so perhaps a new one should be formulated and illustrated - but it's also kind of depressing that so few people realise their impact.
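Since LabVIEW code cannot be shown as text, here is a rough Python sketch of the second workaround - a predefined number of handlers instantiated in an idle state at startup and fed work through a queue, so nothing has to be launched via the UI thread on demand. All names and the handler body are made up for illustration:

```python
import queue
import threading

# Workaround sketch: instead of launching a session handler per connection
# on demand, start a fixed pool of idle handlers up front and feed them
# work through a queue.

NUM_HANDLERS = 4  # the predefined maximum number of parallel sessions

def session_handler(work, results):
    while True:
        request = work.get()
        if request is None:        # shutdown sentinel
            break
        results.put("handled " + request)

work = queue.Queue()
results = queue.Queue()
pool = [threading.Thread(target=session_handler, args=(work, results))
        for _ in range(NUM_HANDLERS)]
for t in pool:
    t.start()

# Enqueueing work never goes through a (blockable) UI thread.
for client in ("client-A", "client-B", "client-C"):
    work.put(client)

for _ in pool:                     # one sentinel per handler
    work.put(None)
for t in pool:
    t.join()

handled = sorted(results.queue)
print(handled)  # → ['handled client-A', 'handled client-B', 'handled client-C']
```

The obvious downside, as noted above, is that the pool size is fixed at startup instead of scaling with demand.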
-
I understand. The optimal solution, if you need the functionality you describe, is probably to split the splash into an optional GUI (the visible splash screen) and a loader. The latter is the top-level VI of the app; it hides itself and does the necessary prechecks etc. in the background. The loader only launches the visible splash screen if this is enabled. The "GUI" of the splash screen then also becomes a VI you can use as e.g. an About box :-) Mads
-
A small tip about splash screens: having the splash screen launch the main program is not always ideal, as it prevents you from disabling the splash screen when needed (we have to do this for customers who run our apps on terminal servers, e.g. to save bandwidth - or at least many prefer it that way). A more flexible solution is to launch the splash screen from the main VI instead - but hide the main until the splash screen has closed. To do this optimally, LabVIEW should offer the possibility of setting the run-time state of VI windows to "Hidden"... but that is not available, so the next best thing is to set it to minimized - and then hide it immediately by running the necessary method (hardly noticeable, unlike if you had shown the window at normal size and then hidden it). This way the main program can check whether it is supposed to show a splash screen; if it is, it hides itself and launches the splash VI... if not, it just normalizes or maximizes itself and continues. Mads
-
Yes, but it is not the most elegant way to do it, as it may produce a flickering window on the screen - and it will force everything to stop the instant it is run. A tidier way is to make sure all internal processes are terminated properly and then close every window the application has opened. When the last window is closed, the application (and its use of the RTE) will stop. Mads
-
Does anyone know where the 2010 Mobile Module has gone? I saw some talk about discontinuing it... but has that happened, or is it just not ready yet?
-
If you set the installer to run the batch files, they will run with the same access level as the installer - at least that has been my experience so far. We use batch files to install our app as a service. If run on their own they have to be run by an administrator... but that is no issue when they are run by the installer.
-
If you are working on Windows (and probably on other OSes as well) you should store the files in a system folder - and use the Get System Directory VI to get the path to it in both applications. Storing files together with applications installed in the Program Files folder is not permitted on Vista and Windows 7 (no write access; this was true for earlier OSes as well, but back then everyone had the bad habit of running as an admin...). The choice of system folder is not straightforward though... you could use the user's documents folder or the (hidden) application data folder - but then the location will vary with the user... or you can use the common ProgramData folder - but then only the server will get write access; the clients will only have read access unless you use a tool to edit the access rights. The latter would not be a problem if the server were in fact serving the data to your clients via a data link (TCP/IP e.g.) - and that would be more flexible than requiring the clients to access the local files - but I assume that is out of scope. Here is a related discussion about where to store files on Windows. MTO
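As an illustration of the folder choice - not LabVIEW code, but a small Python sketch with made-up names (the POSIX fallbacks are my own rough equivalents):

```python
import os
from pathlib import Path

# Sketch: resolve a writable data folder for the app instead of the
# read-only install directory under Program Files. APP_NAME is made up.
APP_NAME = "MyApp"

def data_dir(shared):
    """Per-user folder by default; a machine-wide folder (ProgramData on
    Windows) if the data must be visible to every user of the machine."""
    if os.name == "nt":
        base = os.environ["PROGRAMDATA" if shared else "APPDATA"]
    else:  # rough POSIX equivalents, for illustration only
        base = "/var/lib" if shared else os.path.expanduser("~/.local/share")
    return Path(base) / APP_NAME

print(data_dir(shared=False))  # e.g. C:\Users\me\AppData\Roaming\MyApp
```

As the post notes, the shared folder still needs its access rights adjusted if several users must be able to write to it.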
-
On the 3D Picture Control\Helpers palette there is a nice VI called Sensor Mapping. This VI enables you to import a 3D model, place sensors on the model - and then generate a picture with the readings shown as an intensity plot overlaid on the 3D model. This is *almost* what I would like to use in a new application, except that the sensors should not be treated as points, but as lines. It would also be nice to be able to import a wider range of 3D drawing formats (ideally it should be possible to create some simple 3D models with the app too, but that would just be a bonus). Does anyone find such a task familiar? I'm tempted to outsource this part of the coding, as we have enough work to do on other parts of the application, but if anyone has tips or examples on how to e.g. modify the intensity mapping code in the Sensor Mapping VI, that could be enough. Mads
-
The problem with the public application data folder is that only the user who first ran the program will have access to the configuration files the application created under ProgramData. One might say that the data is not really public... ironically enough. So if a second user logs in and you want that user to be able to see the same application configuration, and maybe also do things that will affect it, ProgramData will not work unless you change the access properties of the files in ProgramData. There are plenty of web sites that discuss this dilemma, and it's the reason why I suggested this idea on the Idea Exchange. This is why we normally use the user's application data folder instead if it's a user-run program, but use the ProgramData folder if it's a service that always runs under the same user. Mads
-
The PDA module allows us to develop for Windows CE on tiny SBCs. CE is not really (just) for cell phones... so I really hope they will continue to support that at least. NI is already generating C code and .NET assemblies though, so perhaps it's not such a leap to support Win7 Mobile.
-
I'm looking for a low-power single-board computer that can run LabVIEW applications and is suitable for integration into a measuring device. Does anyone have experience with LabVIEW on small SBCs running e.g. Windows Mobile? The two basic tasks for the SBC are to communicate over RS485 with an external board, do some number crunching on the data from this external board, and store the results for later retrieval over TCP or another serial link... The power consumption should be less than 6 W. We have been looking at sbRIOs, and at the possibility of using the cards inside a PAC unit (more compact than sbRIO)... but the form factor is too large and the prices are much higher than e.g. an SBC running Windows Mobile or XPe, so it looks like we'll need to find a suitable SBC instead. Mads
-
Have you tried switching between themes? Perhaps if you set the theme to Classic and then back to Vista it might refresh the controls.
-
As people here have already mentioned, you can still use an OO-like approach. I've made several systems in LV 7 and older where you have multiple units multidropped on different types of communication links. The user configures a set of channels and can add units on each channel. Based on the configuration I instantiate the necessary channel handlers and unit VIs (open a reentrant reference to the VIs, or use VITs, and run them with the Run method without waiting for them to complete... use a notifier or user event to control their termination, config reload etc.). The unit "objects" have a generic queue-based interface to the channel handlers: each channel has an output queue and each user of the channel has an input queue. The unit configuration tells it what channel it is on, and it uses the channel tag to get a reference to the channel's queue. The unit can then send data to the channel using the channel queue - and the channel will return data on the unit's reply queue when the reply is available (using the tag of the unit embedded in the message it received on its queue). The queues pass clusters that, in addition to the message data and the tag of the user object, contain parameters like timers, expected reply length (if no reply is expected, the channel handler knows that and does not wait for one), errors etc. Unless you make a general plug-in based framework with sub-panel based configuration windows etc., it can be difficult to make the system flexible enough to fit *all* types of protocols and units though. One way I've solved such cases has been to write plug-ins (launched by a general "plugin launcher") that handle certain types of IO outside the standard IO architecture. In many cases these plugins just "translate" the new protocols to one of the "native" ones and hook up to the rest of the system on a standard channel.
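The tag-based queue routing described above can be sketched in Python (all names are made up; dicts stand in for the LabVIEW message clusters):

```python
import queue
import threading

# Sketch of the channel/unit messaging: each channel has one input
# (command) queue, and every unit has its own reply queue. The channel
# handler routes replies back using the tag embedded in the message.

channel_q = queue.Queue()
reply_qs = {"unit-1": queue.Queue(), "unit-2": queue.Queue()}

def channel_handler():
    """Serializes access to the shared link; routes replies by unit tag."""
    while True:
        msg = channel_q.get()
        if msg is None:                      # shutdown sentinel
            break
        reply = "echo:" + msg["data"]        # stand-in for a bus transaction
        if msg["expects_reply"]:
            reply_qs[msg["tag"]].put(reply)  # the tag selects the reply queue

t = threading.Thread(target=channel_handler)
t.start()

channel_q.put({"tag": "unit-1", "data": "read T1", "expects_reply": True})
reply = reply_qs["unit-1"].get()
print(reply)  # → echo:read T1

channel_q.put(None)
t.join()
```

Because every unit owns its reply queue, many units can share one channel without ever seeing each other's replies.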
-
The multicolumn listbox has two different modes: it can either be a scalar that tells you which row the user has selected, or, if you allow the user to select multiple rows, an array of the row indexes that are selected. You switch between these modes by right-clicking the control and selecting "Selection Mode". If you write to the Value property you can select rows programmatically; if you read it, you get the indexes of the rows currently selected. The Active Cell is used to choose which cell you will be editing programmatically. If, for example, you want to change the background color of the cell in row 2, column 4, you create a property node where you first set the active cell to 2,4 - and then (just expand the property node to have another property underneath the Active Cell property) write your new color to the Background Color property. I suggest you show context help (Ctrl+H) and hover the mouse over the control and its different properties; the help window will teach you how to use it much better than we can do here. Check out the examples that come with LV as well: Help->Find Examples->Search tab->enter listbox as a keyword... or try searching on ni.com.
-
Sure, making VIs that accept all of the related objects is not a problem... but you still need to create the object, and that means the callers must either deal with that directly (which we want to avoid; they should just hand off a notifier name and a user event or queue ref of any type)... or you need to make one object creator for all the data types and join them into a polymorphic VI. The latter limits the new "Notifier with History" to the number of types you have VIs for in the polymorphic creator VI (another thing we do not want). The primitive notifier, on the other hand, has an Obtain function that will accept ANY data type, not just objects of one of the correct classes. I'll probably make my Notifier with History limited to just strings this time... that will at least give more or less the same behaviour as the old notifiers/queues that used to be string-only too.
-
A side note, but anyway: I did some experiments today just to see how I could implement this "Notifier with History", and I played with the idea of using LVOOP for it. For some reason I half-expected to be able to produce unlimited polymorphism this way, i.e. in this case the ability to create a notifier of any data type, while having the top public VIs (obtain notifier, wait on notification etc.) be virtually the same regardless of the data type of the notifier - but as far as I can see that is still not possible. Even if you limited the number of data types you supported, and had different sub-classes for notifiers of those different data types (to then extract and handle the standard queue or event reference that would in fact be at the core of the system), you would still need an old-style polymorphic VI to let the callers use the public VIs without hard-coding the creation of the correct sub-class... right? (I have not used LVOOP much yet...)
-
Me too :-) That way getting notifier references by name - and handling different data types - would be as clean and effective as it could be, and it's functionality that complements what is already available.
-
User events will give you much of the "feeder" logic for free - that's a good idea. However, you still need a queue manager that lets VIs register their feeds or subscribe to a feed... The manager would have an event reference lookup table. The point of this is that you do not know how many feeds or subscribers you have at compile time; instead the code allows you to add these dynamically, e.g. when you create plugins for your application that want access to a data stream. Ideally this could be something packaged with LV... or OpenG, as it would be a very useful design pattern. Daklu - if there is not too much digging required, that would be very interesting to see, yes :-) Mads
-
Notifiers guarantee that all VIs waiting for a notification will be notified, but they do not buffer the notifications (well, you can get the previous one, but otherwise they are lossy). Queues, on the other hand, have a buffer, but do not guarantee that all VIs will see the incoming data: if one VI dequeues data from a queue, other VIs waiting for data in that queue will miss it (making it rather uncommon to have multiple consumers on the same queue). Has anyone made a design pattern that combines the "delivery to all subscribers guaranteed" behaviour of the notifier with the "delivery of all data" behaviour of the (non-lossy) queue?

Here is an example of how you would use such a "notifier with buffer": you have a loop that measures something every now and then. To communicate the readings, you have another loop that takes the readings and writes them to a different app using TCP/IP. You implement this as a producer-consumer pattern based on a queue... this way the TCP communication does not need to poll a list of readings and check which ones are new; instead it sits idle and waits for new data. Great. Later, however, you find that you want to add another function that needs the data. With a regular queue this would mean modifying your producer to deliver data to an additional queue. With a "notifier with buffer", however, you would not need to change the code of the producer at all. You would just register a new subscription to "Notifier name" - and voilà, your new code has a lossless stream of the data it needs. The fact that you do not need to edit the producer makes it possible to add such expansions to built applications, if you have given them a plugin architecture.

Realising this idea would be possible by e.g. making a queue manager that pipes data from incoming queues out to multiple subscriber queues. Whenever you have a producer of data, you create a queue and give a reference to it to the queue manager. If a VI wants to subscribe to the data, it asks the manager for a reference, using the name of the queue used by the producer. The queue manager looks up its own list of queues - and when it finds it, it creates an outgoing queue and gives the subscriber that reference. It also adds the subscriber to a list it gives to a dynamically created feeder. A feeder VI is a VI that waits for data from a given incoming queue (producer) and then writes that data to its list of outgoing queues... So far I've only played with this idea... and perhaps I'm overlooking an already existing alternative, or a big flaw?
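A rough Python sketch of the feeder idea - one thread fanning a producer queue out to per-subscriber queues (LabVIEW has no text syntax, so this is only an analogy, and all names are made up):

```python
import queue
import threading

# "Notifier with buffer" sketch: a feeder thread drains the producer's
# queue and copies every item into each subscriber's own queue, so every
# subscriber gets a lossless stream without the producer knowing about it.

producer_q = queue.Queue()
subscribers = []              # one outgoing queue per subscriber
lock = threading.Lock()

def subscribe():
    q = queue.Queue()
    with lock:
        subscribers.append(q)
    return q

def feeder():
    while True:
        item = producer_q.get()
        with lock:
            targets = list(subscribers)
        for q in targets:
            q.put(item)       # fan out: every subscriber sees every item
        if item is None:      # shutdown sentinel, forwarded to everyone
            break

tcp_q = subscribe()           # the original consumer (e.g. the TCP loop)
log_q = subscribe()           # added later without touching the producer

t = threading.Thread(target=feeder)
t.start()
for reading in (1.0, 2.0, 3.0):
    producer_q.put(reading)
producer_q.put(None)
t.join()

tcp_items = [tcp_q.get() for _ in range(4)]
log_items = [log_q.get() for _ in range(4)]
print(tcp_items)  # → [1.0, 2.0, 3.0, None]
```

Note that a slow subscriber here only grows its own queue; it never makes the other subscribers miss data.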
-
Absolutely Rolf - if anything I would call using DDE this way a kind of security through obscurity... I do not blame you for laughing. Everyone notices if an application opens a TCP port; you even get a nice warning from the Windows firewall. Most apps I make are servers (so I do not avoid the warning anyway...). I could have just added the command to the existing set of commands supported by my client-server interface... however, it does not seem like a good idea to have such a thing available. If I had used TCP I would have put this on a separate TCP port, made sure that port always stayed blocked in the firewall, and added some password protection that "only" the launcher would be able to pass by accessing local information. There should be a better option for local communication... I can think of a couple, but neither of them is elegant... Mads
-
With the events I mentioned - cursor move, window resize etc. - you do not need any of the intermediate events; the really important event is the latest one, the one that tells you where the cursor was moved last, how large the window is now etc. Previous positions/sizes are of much less interest... so I do not understand why you see significant problems - after all, this would only be a setting that you could activate for the events it is suitable for.
-
This goes for all similar events - like the cursor move event or the window resize event - the GUI is affected by how quickly the event is handled. This is why I typically just set a flag in the event case and then do the processing of the event somewhere else. The ideal solution would be to be able to give the event structure a lossy buffer, i.e. it should only keep the latest unprocessed instance of the event, not stack them up. (This has been proposed on the Idea Exchange already.)
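The proposed lossy buffer can be illustrated with a size-1 queue in Python (only an analogy of the suggested event-structure setting; names are made up, and the helper is not thread-safe as written):

```python
import queue

# Sketch of a lossy, size-1 event buffer: only the latest unprocessed
# event survives; stale unhandled events are simply dropped.

def post_latest(buf, event):
    """Replace whatever is pending with the newest event (single-threaded)."""
    try:
        buf.get_nowait()          # discard the stale, unprocessed event
    except queue.Empty:
        pass
    buf.put_nowait(event)

events = queue.Queue(maxsize=1)
for pos in [(1, 1), (5, 3), (9, 7)]:  # a burst of cursor-move events
    post_latest(events, pos)

latest = events.get()
print(latest)  # → (9, 7): only the newest position is left to handle
```

This matches the point above: for cursor-move or resize events, the intermediate positions carry no information the handler needs.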