Rolf Kalbermatter

Members
  • Posts

    3,780
  • Joined

  • Last visited

  • Days Won

    243

Everything posted by Rolf Kalbermatter

  1. And it clutters those popup menus with so many items that normal work in LabVIEW with them becomes almost impossible. Rolf Kalbermatter
  2. You seem to think we have millions of years on our hands ;-). Honestly, just do an ASCII string search on the LabVIEW executable. Unix has nice tools for that, such as grep! Rolf Kalbermatter
  3. It is nice to look at what you get by this if you have a lot of time on your hands! I haven't yet really found many reasons to actually use it, especially because using this in production apps might not be such a good idea. As it is all about undocumented features, NI is free to change this functionality at any time, by changing data types or behaviour, or by removing the functionality for whatever reason, and it won't be mentioned in the upgrade notes at all. So you can end up with some bad surprises when you need to upgrade your app to the next LabVIEW version. Rolf Kalbermatter
  4. No, it isn't from a VI without password. I created it myself. Is that legit? Me thinks so!
  5. If the C code has endianness problems in itself I wouldn't trust it at all. It would indicate that the code was developed by trial and error rather than by clearly understanding what the algorithm should do. Rolf Kalbermatter
  6. Possibly one more CRC algorithm is in the lvzip package in the OpenG Toolkit. It is used to calculate a 16 bit CCITT CRC for the implementation of the MacBinary format. I'm not sure about its correctness in terms of CRC theory, but it seems to do what the MacBinary format requires, whatever that is. Other CRC algorithms might be found in the Info-LabVIEW archives: http://www.info-labview.org/the-archives Rolf Kalbermatter
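For reference, a 16 bit CCITT CRC like the one mentioned above can be sketched outside LabVIEW. This is a generic bit-by-bit implementation of the CCITT polynomial 0x1021, not the actual OpenG/lvzip code; the XMODEM/MacBinary variant corresponds to an initial value of 0, while other CCITT flavours differ only in the initial value (e.g. 0xFFFF) and an optional final XOR:

```python
def crc16_ccitt(data: bytes, crc: int = 0x0000) -> int:
    """Bit-by-bit CRC-16 using the CCITT polynomial 0x1021.

    With the default initial value 0 this matches the XMODEM variant
    commonly associated with MacBinary; passing crc=0xFFFF gives the
    CCITT-FALSE variant instead.
    """
    for byte in data:
        crc ^= byte << 8                       # bring next byte into the top
        for _ in range(8):                     # process one bit at a time
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

The standard check value for the XMODEM variant over the ASCII string "123456789" is 0x31C3, which is a quick way to validate any CRC implementation against published tables.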
  7. Well, display of a number in a myriad of formats is just that: display only. The number itself does not change in memory at all. So what can you do? 1) Instead of displaying the byte array as a string array configured to show hex format, you could display the byte array directly, click on the numeric in the array control and select "Visible Items->Radix" from the pop-up menu. Then click on the d that appears and select the numeric format you want to see. This changes how you see the number in the control but does nothing to the numeric value itself. 2) Wire the byte array into a for loop with autoindexing enabled and use the appropriate To String formatting function: either Format Into String with %d or %x as format specifier, or one of the String Conversion functions such as Number To Decimal/Hexadecimal/Octal String. Rolf Kalbermatter
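The same display-versus-value distinction holds in any language. As an illustrative sketch (not LabVIEW code), formatting the bytes changes only the string representation; the underlying bytes never change:

```python
data = bytes([0, 15, 16, 255])  # made-up example values

# Decimal representation, one string per byte (the %d idea):
decimal = [format(b, "d") for b in data]

# Hexadecimal, zero-padded to two digits (the %02x idea):
hexadecimal = [format(b, "02X") for b in data]

# data itself is untouched; only the display strings differ.
```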
  8. I think they need a little more fine tuning, at least if NI doesn't drop a few platforms before 8.0. For instance, Unix alone is a rather broad selector. Not everything that works on Linux is necessarily portable to Solaris, for example. And the attempt to load the DLL on a Mac and similar issues should also be eliminated. Rolf Kalbermatter
  9. Your question is very unclear. LabVIEW itself is written in standard C, and most new functionality since LabVIEW 6.0 has been written in C++. Other than certain paradigms being similar to how they are in C, there is no direct relation between the C language in which LabVIEW is developed and the LabVIEW programming language you are using as a LabVIEW user. If you refer to the scripting features which are not yet officially released but discussed quite a lot here: that is not a language in itself, and the term scripting is IMO rather misleading here. It is an interface exposed through VI Server which gives the user access to the internal LabVIEW object hierarchy. As such it gives a user quite some possibilities, but the LabVIEW object hierarchy is very involved and nested, and programming through this "scripting" interface quickly gets messy and involved. This is probably one of the main reasons the scripting feature hasn't been released to the public (and one of the first complaints of most people trying to get into scripting). Rolf Kalbermatter
  10. This should help. Rolf Kalbermatter Download File:post-349-1109367188.vi
  11. With LabVIEW 7.0 this is basically no problem. The functions to deal with .ico files have been available in LabVIEW since about 6.0. Check out vi.lib/platform/icon.llb. Those are the same functions used by the application builder to read ico files as well as to replace icon resources in the built executable. In LabVIEW 7.0 you also have a VI Server method to retrieve the icon of a VI. Together these two things are all that is needed.
There are however a few fundamental problems. The function to replace icon resource data works directly on the executable image (well, really on the lvapp.lib file, which is an executable stub that is prepended to the runtime VI library and which locates the correct runtime system and hands the top level VI in that library to the runtime system). As such it can only replace already existing icon resources, as doing otherwise would require relocating the resource table and its pointers, an operation which is very involved and error prone. Windows itself doesn't have documented API functions to store resources into an executable image, as this is functionality not considered necessary for normal applications.
lvapp.lib contains only 16 color and 2 color icons in the sizes 16*16 and 32*32. To support other icons you would first have to add those color depths and sizes to lvapp.lib and improve the icon functions in icon.llb to properly deal with those extra resolutions. This is not really difficult to do. A different problem is that LabVIEW icons are always 32*32 pixels, whereas Windows really needs 16*16 pixel icons too, for display in the top left corner of each application window as well as in detail view. Rolf Kalbermatter
  12. Excluding very old LabVIEW versions, you can assume that the first 16 bytes of a VI are always the same. In fact any LabVIEW resource file has the same 16 byte structure, with 4 of those 16 bytes identifying the type of file:
      52 53 52 43   "RSRC"           resource file signature
      0D 0A 00 03   version marker   (this value since about LabVIEW 3)
      4C 56 49 4E   "LVIN"           file type (or LVCC/LVAR)
      4C 42 56 57   "LBVW"           creator
      Anybody recognizing some resemblance with the Macintosh file type resource here ;-)
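A minimal sketch of splitting that header into its fields (the function and field names are my own; the layout is exactly the one described above):

```python
import struct

def parse_lv_header(first16: bytes):
    """Split the 16-byte LabVIEW resource header into its fields.

    Layout as described in the post:
      bytes 0-3   b'RSRC'        resource file signature
      bytes 4-7   0D 0A 00 03    version marker (since about LabVIEW 3)
      bytes 8-11  file type, e.g. b'LVIN', b'LVCC', b'LVAR'
      bytes 12-15 b'LBVW'        creator code
    """
    magic, version, ftype, creator = struct.unpack(">4s4s4s4s", first16)
    if magic != b"RSRC":
        raise ValueError("not a LabVIEW resource file")
    return version, ftype, creator
```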
  13. Or you could simply specify a range. Works for strings too! "HELLO ".."HELLO!"
  14. Callbacks in LabVIEW itself are, although possible since 7.0, indeed an anachronism. But there are situations where it would probably make sense to use them. Callbacks in low level drivers are an entirely different issue. They are one way to allow an application to use asynchronous operations without having to rely on interrupts or similar things, which in modern OSes are out of reach of user applications anyhow. For cooperative multitasking systems this is basically the only way to do asynchronous operations without directly using interrupts or loading the CPU with lots of polling. Another possibility to handle asynchronous operations on multitasking/multithreading systems is to use events. LabVIEW occurrences are in fact just that. Even though LabVIEW wasn't a real multithreading system from the beginning, for the purpose of its internal diagram scheduling it came as close to real multithreading as it could get.
Asynchronous operations are indeed inherently more difficult to understand and handle correctly in most cases. Especially in LabVIEW's dataflow world they sometimes seem to mess up the clear and proper architecture of a dataflow driven system. But they can make the difference between a slow and sluggish execution, where each operation has to wait for the previous one to finish, and a fast system where multiple things seem to happen simultaneously while a driver waits for data to arrive. With the more and more inherent real multithreading in LabVIEW this has become less important, but in my view it is a very good and efficient idea to use asynchronous operations of low level drivers if they are available. The way I usually ended up doing that in the past is translating the low level callback or system event into a LabVIEW occurrence in the intermediate CIN or shared library. Of course such VI drivers are not always very simple to use, and synchronous operations should be provided for the not so demanding average user.
They can even be based on the low level asynchronous interface functions if done right. But as long as you stay on the LabVIEW diagram level only, callback approaches seem to me in most cases an unnecessary complication of the design. As you have properly pointed out, having a separate loop in a LabVIEW diagram handling such long lasting operations is almost always enough. That is not to say that Jim's solution is bad. He is in fact using this feature not strictly as a callback but more like a startup of separate daemons for the multiple instances of the same task, a technique very common in the Unix world. In that respect it is a very neat solution to a problem not easily solvable in other ways in LabVIEW.
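The pattern described above, translating a low level driver callback into an occurrence-like event that a synchronous, dataflow-style loop can simply wait on, might be sketched outside LabVIEW roughly like this (the driver, callback, and queue are all hypothetical stand-ins):

```python
import threading
import queue

# Plays the role of a LabVIEW occurrence / user event:
events = queue.Queue()

def driver_callback(data):
    """Called asynchronously by the 'driver'; just forwards into the event."""
    events.put(data)

def fake_driver(callback):
    """Stand-in for a low level driver that fires a callback when data arrives.

    A Timer simulates the asynchronous completion after 50 ms.
    """
    t = threading.Timer(0.05, callback, args=("sample ready",))
    t.start()
    return t

fake_driver(driver_callback)

# The synchronous side of the API stays simple: it just waits on the event,
# exactly like a diagram waiting on an occurrence.
msg = events.get(timeout=1.0)
```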
  15. Basically, during development directories are a lot better. For distribution, LLBs may be handy.
Library structure
Pros
> * Unused files are automatically removed from library at save time.
This is not true. You have to load the top VI and select Save with Options to create a new directory structure or library which only contains the currently used VIs. This is the same for LLBs and directories.
> * Upgrading your software to newer LV versions is easy because libraries can hold all the device drivers as well
I think you mean instrument drivers. Keeping them in a subdirectory of your project directory would achieve the same.
> * Moderate compression of maybe 10 - 30 %.
Nothing really to write home about, with hard disks costing dollars per GB, and decompressing also costs performance every time LabVIEW needs to load the VI into memory. For archiving purposes it is a good idea to ZIP up the entire source code tree anyhow.
Cons
> * Returning the files to directories can't be done, or at least I don't know how.
Wrong, as pointed out.
> * Referencing a file inside the library might be possible, is it? I think not.
With VI Server, for sure. If you mean accessing it in Explorer, that is possible too: LabVIEW 7 and later has a feature in Options which installs a shell extension that lets you browse LLB files (and see the VI icons of any LabVIEW file in Explorer).
> * If naming is not done correctly, it can be a drag to find a certain file in a large library.
Again, with the above shell extension (assuming you use Windows ;-) this problem is eased.
* Source code control won't work easily.
Directory structure
Pros
> * Easy to group a logical set of files into different directories
Yes!
> * With dynamic calls, run time editing is possible
No difference to LLBs. You just have to consider that LLBs are handled by LabVIEW like an additional directory level with the LLB name as directory name.
* Source code control is much more effectively possible.
Cons
> * Upgrading the LV version changes device drivers to the new version also. They may, but more probably may not, work.
No difference to using LLBs. You need to keep the instrument drivers together with your project for that to work, but that is also what happens when they are copied inside the LLB. LabVIEW has no preference for VIs in an LLB over a directory. It simply tries to load the VI from where it was last located when the calling VI was saved, and if not found starts to search its standard paths, regardless of whether they are in LLBs or not.
  16. As long as it is about retrieving version information from executables and DLLs, you should look at the GetFileVersionInfo() and VerQueryValue() functions. I'm not really sure if they are in user32.dll; MSDN says they are provided by version.lib, which would usually mean there should be a version.dll somewhere. Changing this information on existing files is a VERY bad idea, as it will screw up installers which you might try to use to upgrade or uninstall existing applications. Accessing property pages of other Explorer namespace objects is very difficult to do without COM (the base of OLE) object handling.
  17. I have meant to try to get something done but never got around to it. And yes, you can't do it without a DLL or CIN. George Zou however has functions in his toolkit to do just that. Rolf Kalbermatter
  18. Another idea might be to use sub panels for this. Create the control as a subVI with all the code you feel necessary, and allow some sort of communication with the main VI, for instance through user registered events or queues. Then, whenever you need the control, insert it into a sub panel control with a simple call. Rolf Kalbermatter
  19. Well, that is not entirely true. It's only true because you usually don't have folders with endings in their names, but if you name your folders for instance Folder.dir, a search pattern of *.dir will return those folders. Of course I agree with you that looking for particular file patterns is a nice feature, and I thought the OpenG version does that also. The problem with this is the requirement of two List Directory nodes for each hierarchy level, which takes almost double the time of one node. I think with preallocating the arrays and such one can win another few percent, but the real time spent is in the List Directory node.
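For comparison, the "one listing call per hierarchy level" idea looks like this in a non-LabVIEW sketch: a single scan per directory classifies each entry as file or folder, avoiding the second "list only folders" pass (the function name is my own):

```python
import os

def list_tree(root):
    """Recursively collect file and directory paths under root.

    One os.scandir() call per level returns both files and folders,
    so no separate folder-only listing pass is needed.
    """
    files, dirs = [], []
    for entry in os.scandir(root):
        if entry.is_dir(follow_symlinks=False):
            dirs.append(entry.path)
            sub_files, sub_dirs = list_tree(entry.path)
            files.extend(sub_files)
            dirs.extend(sub_dirs)
        else:
            files.append(entry.path)
    return files, dirs
```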
  20. And I'm positive that they actually use some cryptographic algorithm such as MD5 or similar to protect the password, so trying to fake the password is a rather difficult if not useless approach. The only way I could see is patching the LabVIEW executable by removing the password check altogether. But that is beyond my abilities.
  21. Your approach will work too, but since your data is really a 16 bit short in big endian format, you could take advantage of the fact that LabVIEW's flattened stream format is also normalized to big endian. Just pick out the interesting bytes and wire them to the Type Cast function in Advanced->Data Manipulation. Wire an int16 constant to the middle terminal, et voilà, you get your numeric 16 bit signed value. This is simple, fast, and will work on any LabVIEW platform independent of the underlying endianness of the CPU, since LabVIEW takes care of the byte swapping (if necessary) for you. If the number in the stream were in little endian format, you would just have to add a Swap Bytes node from the same palette to the wire after the Type Cast. That really swaps the bytes twice on little endian machines (only Intel CPUs ;-), but it's most probably still faster than doing the byte extraction and joining on your own.
  22. Hi Michael I just took a look at your List Directory and thought there would be something to improve. Look at the attached version. On my computer it is almost two times faster in enumerating large directory trees even if it is called first in the timing test to not give it advantage through Windows directory list caching. Download File:post-10-1081496463.vi
