Everything posted by Mads
-
Yes, you have a helper executable - that's what I call the "Launcher". When the whole application set is installed, it adds entries to the registry that associate certain file types with the Launcher...This way the OS runs an instance of the Launcher...which in turn starts the main application (if necessary) and transfers the file path to it. I have extracted the core of the code and attached it here... I have *not* taken the time to make it into a fully working example as that would take more time than I have right now...but you should be able to pick up the main ideas. OpenG VIs are required... It is about time that we get full support for file associations in LV...so please give Kudos to that idea on the Idea Exchange, everyone. Mads File Launcher Example.zip
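For reference, the registry side of this can be sketched in a few lines of Python (the attached example is LabVIEW; the extension, ProgID and launcher path below are placeholders):

```python
# Minimal sketch of the file-association registry entries described above.
# ".myext", "MyApp.Document" and the launcher path are placeholders.
import winreg

EXT = ".myext"
PROG_ID = "MyApp.Document"
LAUNCHER = r"C:\Program Files\MyApp\Launcher.exe"

# Map the extension to a ProgID (SetValue creates missing keys and
# writes the key's default value).
winreg.SetValue(winreg.HKEY_CURRENT_USER,
                rf"Software\Classes\{EXT}", winreg.REG_SZ, PROG_ID)

# Point the ProgID's "open" verb at the Launcher; the OS substitutes
# the double-clicked file's path for %1.
winreg.SetValue(winreg.HKEY_CURRENT_USER,
                rf"Software\Classes\{PROG_ID}\shell\open\command",
                winreg.REG_SZ, f'"{LAUNCHER}" "%1"')
```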
-
I do it this way in my applications. The loader app gets the file path from the argument generated by the OS due to the registry settings - and uses DDE to transfer the path of the opened file. If the DDE connection fails, it assumes that the app is not running yet, starts it...and waits until it gets the DDE link. I do not launch the target app with arguments, simply because I want it to work smoothly when opening multiple files: I have set the .ini file of the launcher so that it allows multiple instances (AllowMultipleInstances=True)...that way people can open multiple files in one go. The reason why I chose DDE instead of TCP is that it is less of an open door to the outside world...although I would prefer it if there was indeed a way to ensure that the communication was only possible locally on the PC (without using the firewall). ActiveX could be an option...as DDE really is a very old technology and might get phased out soon. Mads
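The same connect-or-launch logic, sketched in Python with pywin32's dde module rather than LabVIEW ("MyApp"/"FileOpen" are placeholder service/topic names; adjust to whatever the main application registers):

```python
# Launcher sketch: try the DDE link; if it fails, start the app and retry.
import subprocess
import time
import dde  # part of pywin32

def send_path(path, retries=20):
    server = dde.CreateServer()
    server.Create("Launcher")          # our client-side DDE name
    conv = dde.CreateConversation(server)
    for attempt in range(retries):
        try:
            conv.ConnectTo("MyApp", "FileOpen")
            conv.Exec(path)            # hand the file path to the app
            return True
        except dde.error:
            if attempt == 0:           # app not running yet - start it
                subprocess.Popen([r"C:\Program Files\MyApp\MyApp.exe"])
            time.sleep(0.5)            # wait for the DDE link to appear
    return False
```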
-
I'm waiting for the Developer Suite first....we use VLM as well (does anyone know if we need a new license file, and when the upgrade DVDs will arrive? We just received new DVDs that still had 8.6.1...what a waste.) One question for those of you who have installed it already: is it possible to build 32-bit apps in LabVIEW 64-bit? We'll probably need to support 32-bit systems for a while, but it would be nice to run the 64-bit version of LV and just have an option to build the app as 32-bit.
-
Hey, thanks - I got around the glyph by editing a ring control after all...but I had not noticed that they can click the glyph as well in VIPM. The part I missed in this case was the "-" though...I tried _ .... Thanks again!
-
In VIPM there is a GUI element I'm curious about how they made: the LabVIEW version and Filter by installation status rings - and the search box. They all have glyphs next to the text...and in the former there are separator lines between some of the entries, just like in a menu... One way to get the glyph is to edit a Modern style menu ring and replace its arrow element...(and replace the rest with system style elements) but the arrow moves with the right side of the control, so that gets messy if you want to rescale the control... Any tips (or a new article on the "How did they do that" page on the VIPM site? :-))
-
If all you need to resize on the front panel is one graph, then right-click on that graph and set it to scale with the panel. To ensure that other controls stick in their correct position and/or relation to each other, you can group the controls and/or control where they stick by placing them inside or outside the right quadrant of the scaling frame of the graph. If e.g. you want them to stick to the top of the scaling frame, the group of controls needs to have at least one element that is above the top of the resizing frame. In cases where you have multiple objects that need to resize, the solution is splitters. You can adjust the thickness of the splitter - if it is only a couple of pixels it will in fact not even show when you are running the VI...It's often OK to have the splitter visible though; it tells the user something about how the panel will behave. Mads QUOTE (Michael Malak @ May 27 2009, 12:26 AM)
-
We used to have a poster at the company where I work that said something like: "For every existing customer you lose, you have to recruit 10 new ones to compensate". It may not always be true, but I think it's a very good rule to operate by. One small thing that could improve LabVIEW in this aspect would be to let users choose between a set of user profiles during the installation process (for those of us with Volume Licenses it would be nice to have an *easy* way to include our own profiles in the installation sets as well); that way we would not need to repeatedly deal with the "Express mode" that they have chosen as the default. QUOTE (hooovahh @ Apr 16 2009, 02:37 PM)
-
The MGI VIs are in fact just as slow as the OpenG Config File VIs on my PAC units. The OpenG VIs have their main weakness in the use of a recursive call. The time-consuming part in the MGI Read VI is the "Get Type Info.vi"...which has a locked diagram, so I am unable to investigate it any further... QUOTE (Yen @ May 27 2008, 07:05 PM)
-
As previously mentioned, the resolution is given by sample rate / number of samples; if e.g. I acquire 2000 samples at 4 MS/s the resolution will be 4000000/2000 = 2000 (Hz). It is possible to improve the situation a bit by zero-padding the signal though; this is often referred to as spectral interpolation. Let's say that you have sampled 2000 samples - prior to running the FFT you append n * 2000 zeroes to the sample array. If n = 1 this will double your resolution...however, we are really just talking about an interpolation effect here, so there is a limit to how much you actually gain. Mads QUOTE (Maci @ Jan 30 2009, 04:39 AM)
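The same zero-padding idea, sketched in Python/NumPy instead of LabVIEW (the test tone and padding factor are just illustrative):

```python
# Zero-padding sketch: 2000 samples at 4 MS/s give 2000 Hz bins; padding
# with n*2000 zeroes interpolates the spectrum onto a finer grid - it
# adds no new information, just finer sampling of the same spectrum.
import numpy as np

fs = 4_000_000                       # sample rate, S/s
N = 2000                             # samples acquired
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 101_000 * t)  # test tone at 101 kHz

n = 1                                # padding factor from the post
xp = np.concatenate([x, np.zeros(n * N)])

df_raw = fs / N                      # 2000 Hz bin spacing
df_pad = fs / len(xp)                # 1000 Hz with n = 1
spectrum = np.abs(np.fft.rfft(xp))
peak_hz = np.argmax(spectrum) * df_pad
print(df_raw, df_pad, peak_hz)
```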
-
Superb, your base64 code did the trick, thanks. As for the weakness in the unzip function, I tried downloading the latest version (mine was the previous version) and to my luck the issue seems to have been addressed there, so now absolutely everything works as it should :thumbup: :thumbup: PS. The OpenG base64 decoder still has this problem, and that code is not maintained anymore (I think(?))...so replacing it, e.g. with Mark's code, might be useful for others as well. Mads QUOTE (mesmith @ Jan 15 2009, 02:54 PM)
-
I am using the OpenG POP functions to fetch and decode e-mails with attached zip files, and have run into a rather "fun" combination of errors: 1. If the (zip) file has two zero bytes at the end, that will be reflected in the base64-encoded data; however, only one of them will come through when the OpenG base64 decoding is applied. 2. If a zip file lacks that final zero byte, the OpenG unzip functions (i.e. the dll they use) will crash. So - in my case my code receives and writes a zip file, and that zip file can be read and decompressed just fine with other zip software; however, because the same software is trying to use the OpenG unzip function, it ends up crashing the whole application.... I can probably figure out a way to edit the decoder so that it does not lose any bytes (perhaps this is a known issue that is routinely solved?), however it is still a bit uncomfortable to know that the unzip function can be crashed by feeding it a file with such an error. Mads
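A quick way to catch this class of decoder bug is a round-trip test on payloads with trailing zero bytes. A sketch in Python, using the stdlib base64 as the encoder; decode_under_test is a hypothetical stand-in for whatever decoder is being verified (e.g. the OpenG one):

```python
# Round-trip check for the trailing-zero-byte bug described above.
import base64

def decode_under_test(encoded: bytes) -> bytes:
    return base64.b64decode(encoded)  # swap in the real decoder here

for tail in (b"", b"\x00", b"\x00\x00"):
    payload = b"PK\x03\x04 fake zip data" + tail
    decoded = decode_under_test(base64.b64encode(payload))
    assert decoded == payload, (
        f"lost {len(payload) - len(decoded)} byte(s) with tail {tail!r}"
    )
print("decoder preserves trailing zero bytes")
```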
-
Endevo GOOP Development Suite v3.0 - now released!
Mads replied to Jan Klasson's topic in Announcements
Sure, if the drawing code was made into a general API then I would buy it... For graphs you could have an API where all you needed to do was feed a reference to the graph to a VI from the tool...and voila - now you can draw figures and write text on the graph. :thumbup:
-
Endevo GOOP Development Suite v3.0 - now released!
Mads replied to Jan Klasson's topic in Announcements
A sidetrack...but anyway: In the UML Modeller you have a drawing tool that would be great to use for other purposes. In my case I need the ability to add text comments, arrows, boxes etc. to a graph...as part of a report tool. The picture properties of the graph control make integrating the picture and the graph simple enough...but I still need code to handle all the objects so that they can be moved, resized, deleted etc. I assume you do not have any plans on making that part of the package open source... ExpressionFlow has a good example that lets you draw multiple figures and delete them sequentially, but has anyone made a more complete drawing package in LV that is available for others?
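For what it's worth, the object bookkeeping I mean looks roughly like this - sketched in Python rather than LabVIEW, with made-up class names:

```python
# Rough sketch of the annotation bookkeeping a drawing layer needs:
# a display list of objects with hit-testing, so they can be selected,
# moved, resized and deleted before being rendered into the picture.
from dataclasses import dataclass

@dataclass
class Annotation:
    kind: str          # "text", "arrow", "box", ...
    x: float
    y: float
    w: float
    h: float
    payload: str = ""  # e.g. the comment text

    def hit(self, px, py):
        return (self.x <= px <= self.x + self.w
                and self.y <= py <= self.y + self.h)

class DrawingLayer:
    def __init__(self):
        self.objects = []          # display list, drawn back to front

    def add(self, obj):
        self.objects.append(obj)

    def pick(self, px, py):
        # topmost object under the cursor, or None
        for obj in reversed(self.objects):
            if obj.hit(px, py):
                return obj
        return None

    def delete(self, obj):
        self.objects.remove(obj)

    def move(self, obj, dx, dy):
        obj.x += dx
        obj.y += dy
```
-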
You communicate with the Hilscher card by calling the Cif32dll.dll. The first step is to call the function DevOpenDriver (stdcall) with the card's device number as input (you can set up the Profibus network in SyCon first). The next step is to call DevInitBoard(); the parameters there are the same device number and a text you can set to N/A. Reading and writing to the bus is done using the DevExchangeIO function. Data transfers are typically done in blocks of 32 bytes, so you need to split the data and write, or read and join, multiple blocks. You can find details about the dll from Hilscher.
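The call sequence, sketched with Python's ctypes. Note the exact argument layouts of the Cif32dll functions are documented in Hilscher's CIF Device Driver manual - the prototypes below are assumptions, shown only to illustrate the open driver -> init board -> exchange IO flow described above:

```python
# Hilscher CIF call-sequence sketch (argument layouts assumed; check the
# Hilscher driver manual before use).
import ctypes

cif = ctypes.WinDLL("Cif32dll.dll")   # stdcall convention

DEV_NO = 0                            # device number from SyCon

ret = cif.DevOpenDriver(DEV_NO)
assert ret == 0, f"DevOpenDriver failed: {ret}"

ret = cif.DevInitBoard(DEV_NO, b"N/A")
assert ret == 0, f"DevInitBoard failed: {ret}"

# Exchange a 64-byte output image and a 64-byte input image; with
# 32-byte blocks, larger images are handled by looping over offsets.
out_buf = (ctypes.c_ubyte * 64)(*range(64))
in_buf = (ctypes.c_ubyte * 64)()
BLOCK = 32
for offset in range(0, 64, BLOCK):
    ret = cif.DevExchangeIO(
        DEV_NO,
        offset, BLOCK, ctypes.byref(out_buf, offset),   # send block
        offset, BLOCK, ctypes.byref(in_buf, offset),    # receive block
        100,                                            # timeout, ms
    )
    assert ret == 0, f"DevExchangeIO failed: {ret}"
```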
-
How to scale UI for Different Monitors
Mads replied to dannyt's topic in Development Environment (IDE)
It is a good idea to decide on a minimum resolution you want to support, design everything to work there, and scale upwards. Depending on the GUI you may allow the users to scale the window down from there as well, but at least put in a minimum to ensure that none of the objects that are set to scale get too small to be used. It can be frustrating to work on a small control panel when you have so much screen real estate, but the diagrams can still be as large as you like...although personally I think it is a bad habit to allow those to grow too large as well... Generally I never scale buttons and numeric controls...most of the dialog controls are typically kept at a fixed size in other applications, so I stick to that standard behaviour. The important thing then is to group the objects and place them so that they do not change their separation or location during window scaling. A graph, on the other hand, is a typical object to scale with the window. If I need to scale multiple objects properly it can work to group those objects; however, a more robust solution is to use window splitters. Turning on the "Maintain all proportions" or the "Scale all" option is way too crude. Mads QUOTE (dannyt @ Oct 23 2008, 10:07 AM)
-
The search could be made binary instead; that would cut the number of Nth-line calls dramatically... QUOTE (normandinf @ Sep 20 2008, 02:38 AM)
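The idea in Python terms, assuming the lines are sorted on the search key (read_nth_line is a hypothetical stand-in for the costly per-line read the thread discusses):

```python
# With sorted lines, a binary search needs only O(log n) "read Nth line"
# calls instead of scanning all n lines.
def read_nth_line(path, n):
    with open(path) as f:
        for i, line in enumerate(f):
            if i == n:
                return line.rstrip("\n")
    raise IndexError(n)

def binary_find(path, n_lines, key):
    lo, hi = 0, n_lines - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        line = read_nth_line(path, mid)
        if line == key:
            return mid
        if line < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1   # not found
```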
-
I second your request. I have several applications where the lack of this feature forces me to use a less than optimal GUI...
-
Runtime Engine 5.0.1
Mads replied to sandesh's topic in Application Builder, Installers and code distribution
I've got LV CDs all the way back to 4.0, but is it really the run-time you want? If you want to see the code and not just a running application, you will need to install the development environment.
-
This is not a bug; it's the established way LV decides the order in which to do things: a property node executes the actions on it from top to bottom. All you have to do in the first case is swap the positions of the properties. QUOTE (Aitor Solar @ Jul 15 2008, 12:06 PM)
-
It's not just the native configuration file VIs that cause this; the OpenG code seems to be slow on its own. I especially noticed this now that I'm using them on a RIO controller...just reading a section containing a cluster of 3 small 1D arrays (10 elements in each) takes 2,5 seconds with the OpenG read section VI. I typically have to read more than 10 of these sections, so that takes a whopping 25 seconds or more. Because of compatibility issues with existing software it's not an option to use a different file format, so I'm stuck with the configuration files. I've skimmed the code and saw that the decoding of the arrays uses a recursive call; is that what makes it so slow? I'll have a closer look at it myself when I've got the time, but has anyone looked at this already and found any optimizations?
-
To make the GUI ignorant of the cluster container for tab navigation should be doable without causing any huge problems. That would solve the most annoying problem. The ideal solution, however, would be to have clusters that the control panel is unaware of altogether - so that the controls/indicators could be placed independently of each other even if they were in a cluster. To keep things simple for beginners they could keep the old type of clusters, but add a "diagram-only cluster" for advanced users. You should e.g. be able to include a control/indicator in a cluster by right-clicking on it and "clustering" it. You define a cluster much like a type definition, and then you can add or remove things from that cluster by right-clicking the control and selecting a defined cluster to add it to. Yes, you can always drop the use of clusters to avoid the GUI problem, but on the diagram it's very neat to work with clusters. QUOTE (rolfk @ Apr 25 2008, 06:12 AM) To reply to my own thoughts here...perhaps the right-click to cluster should not be used, since that ties into the control panel...instead you could create a cluster container on the diagram and drag the terminals into it, isolating the clustering to the diagram. QUOTE (Mads @ Apr 25 2008, 08:59 AM)
-
Not an answer - but this is something I've flagged to NI a couple of times: there is no reason whatsoever for the user interface to care about cluster containers. Tabbing should ignore the fact that the controls or indicators are part of a cluster; that is only relevant for the program code, not the GUI. QUOTE (cmay @ Apr 24 2008, 01:21 AM)
-
Well, how large are the arrays you are using, and what is the write wait time? You can generate up to 3 MB/s and the file IO will handle that; the data formatting will not run any faster. Unrelated to the problem at hand, but a tip for the future: the code can be written much more compactly (both in logic and display size)...attached is a picture of basically the same approach (no optimization of the logic though, that is still the same as in your speed test). Not optimal code either, but much easier to read. QUOTE (alexadmin @ Apr 10 2008, 12:31 PM)
-
Like you say, file IO is not the problem - it is faster than the data generation. Formatting the data takes 0,94 us per byte on my machine (generating 3 bytes of formatted data), which means the maximum rate of file data that can be generated is: 3 bytes / 0,94 us = 3191489 bytes/s = 3,04 MB/s. That is the same file write speed I achieved yesterday...in other words, it's not a problem to write the data, however you cannot format it any faster than 3 MB/s. I'm not sure why you only get a few hundred KB/s, but it could be because you actually use a wait in the write loop and have a very small array...with e.g. an array of 50 and a wait of 10 ms, that loop will only generate 14,6 KB/s. If the sampling device is outputting data faster than this, I would skip the formatting and just write the raw data directly to disk. You could then generate the formatted file at a different time - or in a parallel loop. Ironically this would swap the whole approach - put the file IO in the same loop as the data sampling...but separate the formatting loop - it's too slow:-) Mads QUOTE (alexadmin @ Apr 10 2008, 08:14 AM)
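The same back-of-the-envelope check, sketched in Python (the figures in the post were measured in LabVIEW; the format string and sample count below are just illustrative):

```python
# If formatting one sample costs t_fmt seconds and emits n_bytes bytes,
# the formatted-data rate can never exceed n_bytes / t_fmt - regardless
# of how fast the disk is.
import time

def formatted_rate(samples=200_000):
    start = time.perf_counter()
    chunks = [f"{i % 4096:04X};" for i in range(samples)]  # format step
    data = "".join(chunks)
    elapsed = time.perf_counter() - start
    return len(data) / elapsed        # bytes/s the formatter can sustain

rate = formatted_rate()
print(f"formatting sustains {rate / 1e6:.2f} MB/s")
# If the device produces data faster than this, log raw bytes in the
# acquisition loop and format in a separate (or later) pass instead.
```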
-
As others have commented here, buffering the data makes the file IO much faster, and it could be an idea to separate the logging from the sampling. If I assume the USB returns about 1000 bytes on each read, doing the formatting and writing the data to a file the way you do it runs at about 2,2 MB/sec on my machine. One trick you can apply to bump that up a bit is to not build the string the way you do it (feedback node etc.) - instead you just output the string and let it be autoindexed by the for loop. You then use a single-input Concatenate Strings on the output array to get the string. The speed you gain by this relates to the length of the arrays; however, with a 1000-byte input the logging went up to 3,1 MB/sec just by doing this. The fact that things slow down that much when you do the sampling in the same loop might point to a problem with that part rather than the file IO...How fast does that part run if you skip the file IO? On a side note, I would suggest that you try to keep the diagrams more compact - it was barely readable on a 1280*1024 display, and there was not really any reason for it; the code could easily have fit vertically. If you need space on your diagram later on, just hold down the Ctrl key while you click and drag the area you want on the spot you need it on the diagram...When programming in LabVIEW it's also a good thing to trust (and/or make) the data flow drive the execution. Most of the sequence structures you have are either unnecessary or could easily be replaced by pure data flow. QUOTE (alexadmin @ Apr 7 2008, 01:50 PM)
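The same trick in Python terms (the post describes LabVIEW's autoindexed for loop plus Concatenate Strings): collect the pieces and join once at the end instead of growing one string incrementally, which can reallocate the buffer as it grows. Note CPython sometimes optimizes in-place += on strings, so timings vary, but a single join is the reliable pattern:

```python
# Build-then-join vs. incremental string growth.
import time

pieces = [f"{i:06d};" for i in range(200_000)]

start = time.perf_counter()
slow = ""
for p in pieces:
    slow += p                      # incremental growth ~ feedback node
t_slow = time.perf_counter() - start

start = time.perf_counter()
fast = "".join(pieces)             # single join ~ autoindex + concatenate
t_fast = time.perf_counter() - start

assert slow == fast
print(f"incremental: {t_slow:.3f}s, single join: {t_fast:.3f}s")
```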