Everything posted by Mads
-
It's not just the native configuration file VIs that cause this; the OpenG code seems to be slow on its own. I especially noticed this now that I'm using them on a RIO controller... just reading a section containing a cluster of three small 1D arrays (10 elements in each) takes 2.5 seconds with the OpenG read section VI. I typically have to read more than 10 of these sections, so that takes a whopping 25 seconds or more. Because of compatibility issues with existing software it's not an option to use a different file format, so I'm stuck with the configuration files. I've skimmed the code and saw that the decoding of the arrays uses a recursive call; is that what makes it so slow? I'll have a closer look at it myself when I have the time, but has anyone looked at this already and found any optimizations?
-
To make the GUI ignorant of the cluster container for tab navigation should be doable without causing any huge problems. That would solve the most annoying problem. The ideal solution, however, would be to have clusters that the front panel is unaware of altogether, so that the controls/indicators could be placed independently of each other even when they are in a cluster. To keep things simple for beginners they could keep the old type of clusters, but add a "diagram-only cluster" for advanced users. You should e.g. be able to include a control/indicator in a cluster by right-clicking on it and "clustering" it: you define a cluster much like a type definition, and then you can add or remove things from that cluster by right-clicking the control and selecting a defined cluster to add it to. Yes, you can always drop the use of clusters to avoid the GUI problem, but on the diagram it's very neat to work with clusters. QUOTE (rolfk @ Apr 25 2008, 06:12 AM)
-
To reply to my own thoughts here... perhaps the right-click-to-cluster idea should not be used, since that ties into the front panel. Instead you could create a cluster container on the diagram and drag the terminals into it, isolating the clustering to the diagram. QUOTE (Mads @ Apr 25 2008, 08:59 AM)
-
Not an answer, but this is something I've flagged to NI a couple of times: there is no reason whatsoever for the user interface to care about cluster containers. Tabbing should ignore the fact that the controls or indicators are part of a cluster; that is only relevant for the program code, not the GUI. QUOTE (cmay @ Apr 24 2008, 01:21 AM)
-
Well, how large are the arrays you are using, and what is the write wait time? You can generate up to 3 MB/s and the file IO will handle that; the data formatting will not run any faster. Unrelated to the problem at hand, but a tip for the future: the code can be written much more compactly (both in logic and display size)... attached is a picture of basically the same approach (no optimization of the logic though, that is still the same as in your speed test). It's not optimal code either, but it is much easier to read. QUOTE (alexadmin @ Apr 10 2008, 12:31 PM)
-
Like you say, file IO is not the problem; it is faster than the data generation. Formatting the data takes 0.94 µs per byte on my machine (each input byte generating 3 bytes of formatted output), which means the maximum rate of formatted data that can be generated for writing is 3 bytes / 0.94 µs = 3,191,489 bytes/s = 3.04 MB/s. That is the same file write speed I achieved yesterday... in other words, it's not a problem to write the data, you just cannot format it any faster than 3 MB/s. I'm not sure why you only get a few hundred KB/s, but it could be because you actually use a wait in the write loop and have a very small array... with e.g. an array of 50 and a wait of 10 ms, that loop will only generate 14.6 KB/s. If the sampling device outputs data faster than this I would skip the formatting and just write the data directly to disk. You could then generate the formatted file at a different time, or in a parallel loop. Ironically this would swap the whole approach: put the file IO in the same loop as the data sampling... but separate out the formatting loop, it's too slow :-) Mads QUOTE (alexadmin @ Apr 10 2008, 08:14 AM)
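For anyone who wants to redo that arithmetic, here it is in a few lines of Python (the 0.94 µs constant is the measurement from my machine and will vary):

format_time = 0.94e-6   # seconds to format one input byte (measured)
bytes_out = 3           # formatted bytes produced per input byte

print(bytes_out / format_time)          # ~3191489 bytes/s
print(bytes_out / format_time / 2**20)  # ~3.04 MB/s

# Loop-rate limit: a 50-byte array formatted once per 10 ms wait.
print(50 * bytes_out / 0.010 / 1024)    # ~14.6 KB/s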
-
As others have commented here, buffering the data makes the file IO much faster, and it could be an idea to separate the logging from the sampling. If I assume the USB device returns about 1000 bytes on each read, doing the formatting and writing the data to file the way you do it runs at about 2.2 MB/s on my machine. One trick you can apply to bump that up a bit is to not build the string the way you do (feedback node etc.); instead just output the string and let it be auto-indexed by the for loop, then use a single Concatenate Strings on the output array to get the string (a text-language sketch of the same idea follows below). The speed you gain by this relates to the length of the arrays; with a 1000-byte input the logging went up to 3.1 MB/s just by doing this. The fact that things slow down that much when you do the sampling in the same loop might point to a problem with that part rather than the file IO... How fast does that part run if you skip the file IO? On a side note, I would suggest that you try to keep the diagrams more compact; this one was barely readable on a 1280x1024 display, and there was no real reason for it, the code could easily have fitted vertically. If you need space on your diagram later on, just hold down the Ctrl key while you click and drag the area you want on the spot you need it on the diagram... When programming in LabVIEW it's also a good idea to trust (and/or make) the data flow drive the execution. Most of the sequence structures you have are either unnecessary or could easily be replaced by pure data flow. QUOTE (alexadmin @ Apr 7 2008, 01:50 PM)
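The same trick exists in text languages; a rough Python analogue (read_usb is a made-up stand-in for the USB read):

def read_usb():
    # Stand-in for the USB read: returns ~1000 bytes of formatted data.
    return "x" * 1000

# Fast: collect the chunks and concatenate once at the end (the
# auto-indexed output + single Concatenate Strings pattern).
chunks = []
for _ in range(1000):
    chunks.append(read_usb())
log = "".join(chunks)

# Slow: what the feedback-node build amounts to; the growing string is
# copied on every iteration.
log = ""
for _ in range(1000):
    log = log + read_usb()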
-
QUOTE (Michael_Aivaliotis @ Mar 13 2008, 08:58 AM) Cool idea, having a GUI gallery. It's often difficult to come up with good designs, so inspiration, or stealing some good ideas, is great :-) LabVIEW GUIs have a tendency to look... well, like LabVIEW GUIs: non-standard colors, 3D buttons etc. The examples so far avoid most of that, but there is a big no-no in the upper left corner of the windows: a window name that ends with .vi. Aligning the controls and making all controls in a group the same size would be another tip; it makes things look a bit cleaner. It's always easier to criticize than to make things yourself though :-)
-
Try logging into the 2003 server by starting Remote Desktop in console mode. You start Remote Desktop in this mode by running:

MSTSC /CONSOLE

This will allow you to log into the existing active session. QUOTE (Donald @ Feb 29 2008, 01:12 PM)
-
I've solved this kind of problem by creating a server application that administers access to the port and the devices on the port. This "port share" application allows multiple applications to share the same ports (the middleware is the only application that actually uses the ports), and it has commands that enable the apps to reserve a port and/or a specific device (on a multidrop link) if they are in an operation that requires exclusive access for a while (ports are reserved during heavy data transfers, devices are reserved when they are used for tasks that cannot be interrupted by commands from others). In my apps the serial communication is handled by a comhandler plug-in with a queue-based interface, so all I had to do to switch to the port-share system was to replace that comhandler.vi with one that acts as a port-share client instead... (of course, to fully use the port/device reserve functions you typically need to implement that at a higher level). I've thought about releasing this software to the public, however there are some rights issues to solve for that to happen... The idea is free though :-) A rough sketch of the idea follows below. QUOTE (Khodarev @ Feb 6 2008, 08:55 PM)
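In text-language form, the core of the idea looks something like this (a minimal Python sketch; the class, the method names and the transaction format are invented for illustration, and a callable stands in for the real serial I/O):

import queue
import threading

class PortShare:
    # One server thread owns the physical port; clients funnel their
    # transactions through a queue and can reserve the port for a while.
    def __init__(self, transact):
        self._transact = transact        # callable that talks to the real port
        self._jobs = queue.Queue()
        self._guard = threading.RLock()  # reservation lock, re-entrant per client
        threading.Thread(target=self._serve, daemon=True).start()

    def request(self, msg):
        # Send one message, block until its reply arrives. Waits if
        # another client currently holds a reservation.
        with self._guard:
            reply = queue.Queue(maxsize=1)
            self._jobs.put((msg, reply))
            return reply.get()

    def reserve(self):
        # Take exclusive access, e.g. during a heavy transfer.
        self._guard.acquire()

    def release(self):
        self._guard.release()

    def _serve(self):
        # The only code that ever touches the port.
        while True:
            msg, reply = self._jobs.get()
            reply.put(self._transact(msg))

share = PortShare(lambda msg: b"echo:" + msg)  # stand-in for real serial I/O
share.reserve()
try:
    print(share.request(b"CMD1"))  # no other client can interleave these
    print(share.request(b"CMD2"))
finally:
    share.release()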
-
I do not think the reference is closed automagically; there must be some bug somewhere that closes or loses the reference. Does it write once and then never again? Does the file exist? The code shown never creates it (you have "open", not "open or create", as the action into the Open/Create/Replace File function)... If it is a pre-existing file, have you tried generating a new one? If you feed the queue twice and probe the reference and error outputs in this logging VI, does the reference ever change, and where does the first error appear? Mads
-
Include other folders than the support folder
Mads replied to Mads's topic in Development Environment (IDE)
That's perfect, thanks for the help! I had overlooked the Add button on the Destinations tab. Regards, Mads PS for NI: it would have been great to be able to add it directly in the set destination input on the Source tab as well (then I would have found it), but that's a minor issue. -
In the app builder of LV 8.5 you can add a folder to the source list. I have several folders where I store plug-in VIs and tools (VIs or libraries), so it would be nice if I could add those to the source list and specify that their diagrams should be removed... however, even though you add a folder, it will not actually add a folder; it will just take the content of that folder and place it in a destination that is limited to being either the support folder, the exe folder or the folder of the caller... It seems very strange that they could not add a user-specified folder option there. So this seems to mean that I have to do what I used to do, namely create the folders manually, save all the VIs and libs without diagrams manually (although I did make a VI that does this automatically) and then include those folders in the installer. To sum up: is there a more streamlined way to achieve this, or should this be a feature request for the next LV? Perhaps the OpenG app builder can help with this? I've not used it yet... Mads
-
The path does not refer to the correct VI in the sample; also, I'm not sure why you insist on using a Call By Reference node? I've updated your sample to a working one. As you will notice I've also added a wait in the subVI loop; you should always have something that prevents a loop from running as fast as possible, otherwise it will use up all the CPU time. On a side note I would also recommend sticking to dialog controls and colors; that will give the users a familiar and professional-looking interface, and the advantage of an OS-standard interface should not be underestimated. I also noticed that you are using the place-as-icon feature on the diagrams... that is the default and it's probably easier to read for beginners, however it takes a lot of space, so in the long run it's a good idea to turn it off, just a tip. The example can easily be expanded to show cool ways to use the Run method... you could e.g. make the subVI a template and have as many of them running in parallel as you want. You could also use e.g. a notifier, or read the front panel properties in the subVIs, to close all windows when the main app window is stopped.
-
If you want the subVI to run in parallel with the main VI you need to run it using the Run method as I showed in the previous reply, not a Call By Reference node.
-
You have two options: either have a separate loop in the main VI and (statically) call the subVI from there (then only that loop will wait for the subVI to return, the rest of the main VI will continue to run), OR call the subVI dynamically by opening a reference to it and then using the Run method on that reference with Wait Until Done set to false. To set the inputs of the subVI you use the Set Control Value method. The latter method is great in many situations; I use it to call VI templates (.vit files), where you automatically get a unique instance for each call. A typical use of this is to allow the user to open as many graph windows as he wants (a loose text-language analogue of this follows below)... Regards, Mads
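For readers coming from text languages, a rough analogue of the dynamic option (illustration only: threads stand in for dynamically run VIs, and the window names are made up):

import threading
import time

def graph_window(title, channel):
    # Stand-in for a .vit instance: each call below gets its own copy,
    # with its "controls" set via the arguments before it starts running.
    print(f"{title}: showing data from channel {channel}")
    time.sleep(1)
    print(f"{title}: closed")

# "Set Control Value" = pass the arguments; "Run" with Wait Until Done set
# to false = start() returns immediately, so the caller keeps running.
for n in range(3):
    threading.Thread(target=graph_window, args=(f"Graph {n + 1}", n)).start()

print("The main program continues while the windows run in parallel")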
-
Just change the name from LabVIEW to "G" and you are halfway there. The "Lab" part makes it sound like a quirky (non-programmer) engineering tool.
-
After 500 ms it is not unreasonable that you've only received 500 bytes, at least if the device has a delay in its response... so if the message is longer you could just wait a bit more before checking how many bytes you have received. The buffer length can be adjusted, however it's a good idea not to rely on the buffer. A general receiver (when no termination characters are used) checks the number of bytes in a loop: after each check it waits a certain time before the next check, and if no more data has been received it terminates the loop. This "interbyte" wait should be at least as long as the transmission time for a single byte (preferably a bit longer; I typically use 20-50 ms). It can also be a good idea to have a timeout so that the loop will stop if no reply ever comes and/or if the reply turns out to be continuous... Attached is a VI (SerialIO) for such serial communication (it can be used for read, write and read&write operations). Regards, Mads
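In text form such a receiver looks roughly like this (a sketch assuming pyserial; the port name, baud rate and wait times are examples you will need to tune):

import time
import serial  # pyserial

def read_reply(ser, first_wait=0.5, interbyte_wait=0.05, timeout=5.0):
    # Read until the device pauses longer than interbyte_wait, or until
    # the overall timeout expires (no reply, or a continuous stream).
    time.sleep(first_wait)            # give the device time to start replying
    data = bytearray()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        n = ser.in_waiting            # bytes already in the receive buffer
        if n:
            data += ser.read(n)
        elif data:
            break                     # a full interbyte wait passed with no new data
        time.sleep(interbyte_wait)    # should exceed one byte's transmission time
    return bytes(data)

ser = serial.Serial("COM1", 9600, timeout=0)  # example settings
ser.write(b"READ?\r\n")                       # example query
print(read_reply(ser))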
-
If the string is binary with the MSB first, then just wire the read string into a Type Cast and set the type to an array of integers. The resulting array can be wired directly into a graph terminal. If the string is LSB first you'll need to swap the bytes prior to conversion; one simple way of doing that is to reverse the string, do the type cast, and then reverse the resulting array. Make sure you have received an even number of bytes prior to conversion, otherwise the last number may be invalid... If the string is in readable form the conversion depends on whether it is decimal, hex or something else. If you take a subset of the string (two bytes at a time) you can use the string-to-number functions to get the number array for the graph. Regards, Mads
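In a text language the same conversions look like this (a Python sketch using the struct module; 16-bit signed integers are assumed here, so adjust the format character for other widths):

import struct

raw = b"\x01\x02\x03\x04"  # e.g. four bytes read from the instrument

# Drop a trailing odd byte, as noted above.
raw = raw[:len(raw) - (len(raw) % 2)]
count = len(raw) // 2

# MSB first (big-endian): the direct "type cast" case.
print(struct.unpack(f">{count}h", raw))  # (258, 772)

# LSB first (little-endian): the byte order is handled by the format
# string instead of the reverse/cast/reverse trick.
print(struct.unpack(f"<{count}h", raw))  # (513, 1027)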
-
I've just migrated to Vista and LabVIEW 8.5 and was thinking about creating new 48- and 256-pixel full-color icons, which Vista supports, for my applications. However, even though I successfully built an application with such a set of icons, something strange happened: if I view the application with a large or very large icon it still shows the LabVIEW icon instead of my customized one. At first I thought it was the app builder that lacked the functionality to include the larger icons in the application (the icon editor obviously does), but if I open the built executable with Microangelo Librarian I can see that it does indeed have the custom icons in it; Vista just does not show them for some reason. I've always been annoyed by the fact that NI forces their name and logos into the installers when I really want the development tool to be invisible to the end users (you don't see the installation of your average app show Microsoft logos and ask people where to save the Visual Studio files...), however this will make the LabVIEW logo pop up instead of my app icon whenever the users choose to enlarge the icon view. Have any of you seen this as well and found a way around it (other than a different development tool, of course)? Mads
Now where is the delete button when you've been a bit too hasty (membership required?)... It turns out this is more of a Vista bug: it refused to update the large icon views properly. The built app kept the default LV icon in the large views, but if I copied the executable to another directory the icons were displayed as they should be. Phew! Mads
-
Having tested this a bit more, the bug is still there, however its nature is slightly different than it first appeared: setting FP.WinBounds does not show the window, however once the window is displayed (using the FP.Open property) LabVIEW spends time rendering the window based on the FP.WinBounds values first, and THEN renders the window according to the arguments of the FP.Open property. This means that if you open the panel in a maximized state you will not see the window appear maximized; it will first appear as specified by WinBounds and then maximize itself, making the opening an ugly two-step procedure instead of the proper behaviour seen in LV 7.1.1 (where the user just sees the window pop up maximized). Download File:post-1777-1166111124.vi
-
In LV 8.2, setting the FP.WinBounds property on a hidden window (hidden using the FP.Open invoke node) makes the window visible. This did not happen in LV 7.1.1. The help file seems to indicate that it should still be possible to set the property in the background like this with the window hidden, however that does not seem to be the case. In LV 7.1.1 you could position the window and then choose to show it, e.g. in a maximized state (making the window appear in the defined position whenever the maximization is turned off); I am not sure how this can be achieved in LV 8.2(?). The reason I want to do this is that setting the run-time position in VI Properties -> Window Run-Time Position is not an option if you also want the run-time state of the window to be minimized (ideally it should be hidden, but that's not an option).
-
Make a VI and in its Window Appearance properties set it to run transparently (100%). Add some code in the VI to call a subVI that displays its front panel when you click a button, and add a property node that sets the transparency back to 0% once the VI is running. Run the VI and call the subVI... notice what happens when the front panel of the subVI opens: the main VI front panel will flash, making the transition ugly. To see the difference, remove the "Window runs transparently" property and the property node, and run again. Now you get the normal smooth transition. The reason I wanted to make the main VI transparent on startup is that I wanted it to call a splash screen (instead of the splash screen calling the main VI), and the main VI should then stay invisible (no startup flash either) until the splash screen closes (it would have been great if you could set a VI to open in a hidden state; why have they not added that to the list of options? :-( ). There are ways to work around the problem so it's no big deal in this case, however it bugged me... so perhaps it's a bug(?) :-)
-
Yes, you will need to write a handler for this using events, however it is not that hard. When the drag is started you get a drag start event ("Drag Starting") that gives you the data that is dragged (not just one row). The data comes as an array of variants, but you can use Variant To Data to get what you need. Store that data, and if it is dropped onto the second list you can copy/move the data in code. To ensure the dropped items come from the other listbox, scan the "Available Data Names" output from the event for LV_LISTBOX_ITEMS and only perform the drop if you find it... Mads
-
Data only recursion vs data&function(object) recursion
Mads replied to Jacemdom's topic in LAVA Lounge
Quicksort is an example of a function that is naturally written using recursion... however it is too slow in LV, so it is faster (but less elegant) to do it without recursive calls (the textbook recursive version is sketched below for reference). I needed recursion for the development of this code: http://zone.ni.com/devzone/cda/epd/p/id/12 Mads
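The textbook recursive version, as a Python sketch (clear and short, but each call allocates new arrays, which is part of why the same shape is slow as a recursive LabVIEW VI):

def quicksort(a):
    # Base case: arrays of 0 or 1 elements are already sorted.
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    # Partition around the pivot, then recurse on each side.
    left = [x for x in a if x < pivot]
    middle = [x for x in a if x == pivot]
    right = [x for x in a if x > pivot]
    return quicksort(left) + middle + quicksort(right)

print(quicksort([5, 3, 8, 1, 5]))  # [1, 3, 5, 5, 8]
-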
I normally use readable files for configurations, and the philosophy is that if a user wants to screw up the system he will be able to anyway (if he is not able to read the content of a file he can still change or delete it, so encryption is not much help); the only important thing is to prevent him from doing so by accident. If he does delete or edit a file intentionally then that's his fault, not the software's (you have to draw the line of responsibility somewhere). You can of course reduce the risk by making the files less accessible and/or making the content look less tempting to edit (adding warnings within the files is an option). If possible the software should also be able to detect and filter out invalid configurations. I store most configurations in the Application Data folder. The only time I use encryption is when I store information that needs to be secret (like passwords, proprietary parameters etc.). Mads