
Rolf Kalbermatter

Members
  • Posts

    3,786
  • Joined

  • Last visited

  • Days Won

    245

Everything posted by Rolf Kalbermatter

  1. It doesn't seem to work overseas. Whatever serial number I could come up with from all the different SSP shipments etc., it kept telling me that it is an invalid serial number for the product I am trying to activate. NI support wasn't very clear but claimed that it doesn't work yet and that I should just continue to use evaluation mode until I receive the proper SSP shipment. But that dialog at startup is for sure annoying. Rolf Kalbermatter
  2. Because in all LabVIEW versions up to 7.1 the application builder has no possibility to add a VISA or any other IO library installer such as NI-DAQ, NI-488, etc. They all come with the appropriate hardware, can be downloaded from the NI site, or can be copied from the Device Driver CD-ROM. In the case of VISA, or even worse NI-DAQ, it is not very likely that everybody would want his installer bloated with hundreds of MB of driver installation, which someone might have to reinstall anyhow because a new driver version has been released since you built your app. Rolf Kalbermatter
  3. LabVIEW itself has always been a nice citizen as far as undesired interaction with other installations on the same machine is concerned. I do not expect any difference with LabVIEW 8 unless the packagers at NI somehow messed up and built a bug into the installer. With add-on toolkits it is sometimes a different story. All native LabVIEW add-on toolkits should be fine, but others such as IMAQ can be a bit problematic in certain cases. You may sometimes need to install certain bug fixes, and you should definitely allow the toolkit installer to also install compatibility VIs for older LabVIEW versions into their respective directories. With drivers you have to be careful to allow the driver installation (for instance NI-DAQ) to upgrade the (DAQ) VIs in the older LabVIEW systems, which is sometimes necessary for them to continue to work with the new driver. These last two points, where a toolkit or driver for LabVIEW version X may also include VI libraries for LabVIEW version X-1, X-2, etc., require however that the older LabVIEW version is already installed on the system. Installing older versions of LabVIEW after newer versions is not a very good idea. Rolf Kalbermatter
  4. The solution to this is subVIs and state machines, and in my case also LV2-style globals to encapsulate specific data and the operations on it. With this you can always get diagrams that fit onto a 1024*768 screen. My LabVIEW programs, sometimes involving 800 and more VIs of all sorts of complexity, seldom go over this margin, and if they do it is only about the outer case or loop structure on one or two sides but never real code. And no, this does not require globals at all. I only use globals for some simple status variables such as a global application abort, and for application constants, but never for other data variables. If I have to grade a diagram made by someone else, any globals other than simple booleans or application constants automatically give negative points. So does not using state machines for user interface handling, or not using subVIs (and spaghetti code also scores high on my negative list). Rolf Kalbermatter
  5. There is no native toolbar functionality built into LabVIEW yet. You will have to create the toolbar by adding small customized buttons to the upper border of your front panel and adding event support in your event handler for each of those buttons to trigger the corresponding action in your UI state machine. Rolf Kalbermatter
  6. I wouldn't go and add DataSocket to the complexity of the system. A simple TCP/IP server in the RT app, similar to the DataServer/Client example, will probably work much more reliably and not add much overhead to your app. While I haven't done that on RT systems yet, I often add some TCP/IP server functionality to other apps to allow them to be monitored from all kinds of clients. Rolf Kalbermatter
  7. Unicode doesn't solve every problem there is. First, there is not one single Unicode encoding. While Microsoft standardized on 16-bit Unicode (which incidentally does not have enough code space to represent every possible character on earth with a single code unit), Unix usually standardizes on 32-bit Unicode. Also, the Unicode collation tables used by Microsoft have some significant differences from the ones proposed by the Unicode Consortium. So implementing Unicode support in LabVIEW will NOT bring a single unified Unicode system across all the supported platforms, but instead make it just about as difficult to write text in LabVIEW in a way that shows the same characters on all supported platforms as it is now. LabVIEW itself uses multibyte characters to support non-western code pages (with the help of the underlying OS), and that, while not perfect, serves almost as well as a Unicode solution could. And the biggest problem with global applications is not the character set but the different orientation some languages have in written text. For this there is no really well-working solution yet that would allow writing applications that can adapt to the different orientations with the flick of a switch, and I doubt there is a possible solution that can figure this out automatically. Rolf Kalbermatter
  8. You summed up most of the negative sides of LLBs, and the rest are covered in the other posts. I think using LLBs during development is nowadays a big no-no. I do sometimes use them for distribution of function libraries AFTER development is finished and I don't expect many changes anymore. Still, this is only for distribution. The actual source code for further development or bug fixing is always kept in a directory instead and archived as such. Rolf Kalbermatter
  9. It really depends on what you want to do. A LabVIEW dialog is quite different from a Windows dialog. With Windows dialogs you could in principle use the Windows API to search for the default control and send a message to it to dismiss it. This wouldn't work for LabVIEW dialogs, since LabVIEW controls are not standard Windows widget controls but are fully custom-implemented by LabVIEW itself. The easiest way would be to post a Return or Esc key press to the keyboard queue. This assumes that the dialog has these keys assigned to its OK and Cancel buttons (almost always the case for Windows dialogs, but in LabVIEW dialogs implemented as VIs written by the application developer, this specifically has to be assigned by the developer). If this doesn't work, you have to distinguish between Windows dialogs, where you would enumerate the controls in the dialog and then send a message to the correct control using Windows API functions, and LabVIEW dialogs, where you would use VI Server to do the same. Rolf Kalbermatter
  10. You can't really create Windows-compatible binaries with the normal Unix versions of GCC. Ok, if you were intimately familiar with all the ins and outs of the Wine project you might be able to do that, but I doubt there are more than a few dozen people worldwide who could. What you will need is at least a tool chain such as MinGW, with special support for the Windows Portable Executable file format. It is most easily done with Visual C. Rolf Kalbermatter
  11. No!!!! 640*480 = 307,200 pixels; make this color, so at least 24 bits per pixel, and you are already at about 920 kB per frame; and real time means at least 25 frames per second, so do the math. Not to mention that you need at least double the network bandwidth of what you want to put through it. There are solutions to get real-time video of this size through a normal network, but they are special streaming protocols with patented compression algorithms and not suited to be implemented in LabVIEW at all, aside from the royalties you would have to pay for such a solution. Rolf Kalbermatter
  12. }; has a sizeof(struct example) of 8 in most C implementations, but "Flatten to String" and TCP Write generate only six bytes of output. So I've had to clean up those poorly aligned structures and stick a padding short between VariableA and VariableB. I have a couple of new questions I'll post in more appropriate places (feel free to help out some more!), but I wanted to express my gratitude for getting me "unstuck". Thank you!
Yes, LabVIEW uses byte packing on all platforms except SPARC stations as far as I know. This is because Intel and PowerPC CPUs generally have no big penalty for accessing operands on boundaries other than their integral operand size, whereas a SPARC CPU has a huge penalty in those cases. Rolf Kalbermatter
  13. This VI just calls into a private LabVIEW function with the Call Library Node. Not much you could learn from this, other than that you can actually call into LabVIEW itself with the Call Library Node to call any of the functions documented in the External Code Manual. NI password-protects these VIs because it considers those exported functions private (they are not documented in the External Code Manual) and wants to reserve the right to change or remove them in the future. Yes, the data of a Picture Control is the stream of drawing commands, and its value therefore will be the last drawing command stream passed to it. Instead of flattening the data and parsing it, you can just as well create a local variable and read the data, and you will see the last stream of drawing commands passed to the control. No, the Picture Control is a fully built-in LabVIEW control. It was added around LabVIEW 3 to the built-in controls of LabVIEW, but at that time it was a separate toolkit which installed the front panel control and the Picture Control Toolkit functions into the vi.lib directory. Still, the control was actually part of the LabVIEW executable itself. It was added to the standard LabVIEW distribution later on, but it is a LabVIEW control just like the numeric control or any of the graph controls, with the exception of the 3D control. But I think this control doesn't even maintain a pen location at all after the drawing command stream has been evaluated. I wouldn't know what it should maintain it for, as it directly translates the drawing stream commands into Windows GDI drawing commands. Try to do a Draw Line function without a Move Pen function first and you will see that the line always starts at 0,0. Rolf Kalbermatter
  14. LabVIEW does some optimizing when writing to terminals and local variables. Unless you set the control to update synchronously, LabVIEW only posts the new data to a buffer and signals the control that it should update, without waiting for the control to redraw. The actual redrawing is done in a different thread, the UI thread, not more than 50 times a second if necessary at all. This is still much faster than even the fastest human could distinguish the data. For the Value property LabVIEW does nothing like this. The Property Node will hang in there and only return to the diagram after the new values have been redrawn in the control, which for a graph is typically a very lengthy operation. So with a terminal or local variable the diagram can continue to execute other code, or in the case of a loop recalculate much new data before the graph is completely redrawn, while with a Value property the actual redrawing of the graph will limit the speed at which the loop can execute. For updating different controls depending on some other condition, consider a case structure instead. Programs written in such ways will typically perform much better. Rolf Kalbermatter
  15. Not that you would probably get much from the diagram! The actual code is most probably almost all implemented in C inside a Call Library Node or a Code Interface Node. Rolf Kalbermatter
  16. While it will be virtually impossible to create a LabVIEW routine which could beat an optimized C routine, the speed difference in general is not very large, as long as you understand how LabVIEW handles the data. LabVIEW always uses resizable data structures for arrays, while in C you typically work with preallocated memory chunks. The reason is simply that because in C you have to deal with memory allocation and deallocation anyhow, you usually won't even think about reallocating an array in a loop as you add new values to it. With LabVIEW doing all the memory allocation for you, it is easy to build a loop where you use the Build Array function to construct an array, and then people are surprised that this loop takes ages to execute. Instead, using the auto-indexing feature on the tunnel of the loop border (or sometimes, for more complicated solutions, preallocating the array in a shift register on the loop and using a Replace Array Subset function inside the loop) will basically create a loop which does at least as well as a non-optimizing C compiler would. So in general the speed difference in typical applications between LabVIEW and C, if you know what you are doing in LabVIEW, is minimal. The only areas where it is usually hard to write LabVIEW code that comes close to a well-programmed C program are algorithms with lots of bit manipulations and complicated array manipulations. The most important difference between LabVIEW and C, in my opinion, is that you can write quite extensive LabVIEW programs, albeit quite often with very bad performance and architecture, even though you have no idea about programming, whereas you need at least a good basic idea about programming before you can create even the most simple C program. Java, having similar high-level advantages as LabVIEW such as implicit memory management, will basically suffer from the same issue: you can write very badly performing routines if you don't know how to shape the algorithm in a way that lets Java use the best programming structures for the problem at hand. Rolf Kalbermatter
  17. I'm very curious about remote debugging ;-) Rolf Kalbermatter
  18. Aah, they do that already!! It is called garbage collection and kicks in as soon as the top-level VI that invoked the Open refnum function goes idle. This is actually a feature you can't turn off, other than for VISA refnums (yeah, I know they are now VISA resource names, but the underlying mechanism is still a refnum) in the Options->Miscellaneous dialog. Rolf Kalbermatter
  19. I hope you do not consider my remarks as saying IVision is a hobby project. It is far from that! What I was saying above is that a lot of professional users will prefer IMAQ Vision because it is from a well-known party (namely the makers of LabVIEW), and they feel more comfortable dealing with a company of that size, which gives them a certain feeling of safety. This is not just about IVision but in fact about any toolkit ever released for LabVIEW. The NI version doesn't necessarily have to be better to sell, whereas as an independent developer you really need to have some really good aces up your sleeve to get even a small part of the market. This is and has been the case since the beginnings of 3rd-party LabVIEW toolkits, although back in the early days it may sometimes have been a little easier. Just consider that the two most successful 3rd-party toolkits (the Radius or so Database Toolkit and the Graftek Vision Toolkit) were bought after some time by NI, and the rest have died or eke out a low-volume existence hardly paying the administrative costs of distributing them. Some people will find IMAQ Vision too expensive for what they want to do and will be looking at IVision. The few who are left after that, because even IVision is considered too expensive (for professional use, that is; for the rest the evaluation version will probably work well), are probably not very interesting to carry a project of this size. They will consist of people doing this just as a hobby or being in education, where time spent is not an issue. However, even for the last group this project is way larger than what a typical student could possibly manage in a semester project or something like that. And this would assume that such a student already has quite some knowledge about LabVIEW, LabVIEW external code, C programming, and image analysis theory. I for my part can shine in the first three categories but lack considerable knowledge in the last one to carry such a project myself. Besides, I have other priorities at the moment. Rolf Kalbermatter
  20. If you control both sides of the connection, it would be simple to create two small applications to do that. If the other side is however already provided, it will all depend on what protocol that application supports and whether that protocol has some file transfer capabilities such as FTP would have. Rolf Kalbermatter
  21. You can call external VIs from an executable using VI Server. However, you have to be aware of a few things.
1) The VI must be in the same version of LabVIEW as was used to create the executable.
2) Calculating the path of an external plugin VI relative to a VI inside the executable needs to take into account that a VI in an executable is always at the path <application directory>/<your executable>.exe/<your VI>.vi. So assuming the plugin VIs are located in the same directory as the executable file, you need to strip two times from the path you get from the This VI Path node and then append the VI name of your plugin.
3) The plugin must be able to reference any and every subVI directly. This means any subVI used by the plugin not already contained in the executable needs to be copied to a place where the VI can find it. Just copying the plugin VI into the application directory will in most cases not work. Better would be to load the plugin VI in LabVIEW and then, using Save with Options, save the entire hierarchy to a new location, using LLBs, including vi.lib and any other option in the selection in the dialog. Move that LLB into the executable directory and adjust the path mentioned in point 2) to also account for the new LLB name in the path.
This solution isn't very nice in that each plugin will contain all subVIs even though they might already be present in another plugin or the executable itself. It will however run, as long as you don't happen to create two different subVIs having the same name. Getting a more elaborate plugin distribution, with plugin-specific functions in the plugin LLB and the rest being added to the executable or some support LLB, is far beyond a short explanation like this. It basically won't be possible without some custom-made tools, where the OpenG Builder may actually be a good starting point. Rolf Kalbermatter
  22. It probably won't help in this situation alone. I believe that the location and version of the Math Kernel Library is registered in the registry at install time too, and this is used at load time by lvanlys.dll to link to the appropriate DLL. This is the reason you get the different error message: the DLL can't find the registry entries or the DLLs and therefore has to abort, as it really is only a wrapper around the Math Kernel Library. Of course you can try to figure this all out, but chances are that the Math Kernel Library or something else will change again in the next LabVIEW version, and then you can either decide to stay with your current LabVIEW version, try to figure out the new dependencies, or go with the flow and use the installer anyhow. Expect this to be a never-ending battle if you don't want to go with the Runtime System Installer. Rolf Kalbermatter
  23. The version information is just another resource in the executable image's resource table. However, there is no built-in functionality in Windows to add resources to executable files as far as I know. For this you need so-called resource compilers, which are usually part of any C/C++ development environment. I believe there are open source variants around, although their quality and ease of use as an external tool from other applications varies. One possible candidate should be contained in the MinGW tool chain, which is a Windows compiler and linker based on the ever so popular GNU C compiler tool chain. Rolf Kalbermatter
  24. Very simple. Create two LabVIEW strings with the correct size. You can use the Initialize Array function to create two arrays of U8 numbers of the necessary size, with the value 0. Then use the Byte Array To String function to convert these arrays into strings, place a Call Library Node on the diagram to call your _getUser function, and pass the two strings at the appropriate locations to that Call Library Node, configuring them to be C string pointers. LabVIEW will take care of passing the correct pointers to the DLL and on return will truncate each string to the actually filled-in information up to the terminating 0 byte character. Basically the calloc is done through the Initialize Array function, and the free is done implicitly by LabVIEW as soon as the string is no longer used in the diagram. Since the allocation and freeing of the memory is done by the caller (here LabVIEW), there is no need to use a particular memory manager instance. The function will just take whatever memory is provided to it and fill it in with whatever information it has. Rolf Kalbermatter
  25. Which could be quite some time! Zooming pictures is not a simple thing. For instance, bilinear interpolation works quite nicely for photo-like pictures but creates terribly blurry images for vector graphics such as a LabVIEW diagram, and besides, it is rather computationally expensive. Other algorithms, such as resampling the picture, are reasonably nice for vector graphics but produce really bad artifacts for photo images when the zoom factor is not an integer multiple. The advantage of that algorithm is its computational speed. Since NI has an add-on software package (IMAQ Vision) which works very well with photographic images and has a large number of options for zooming pictures, the need to add sophisticated zooming capabilities to the Picture Control is rather limited, and therefore it is likely that many other, higher-priority features will be tackled first. Rolf Kalbermatter
