Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. OPC will probably never run on real time. OPC is based on OLE, and that is a huge system to add to a real-time environment. As such it is probably not possible to create an OLE system that would not throw off the real-time determinism of the system. Also, OLE is a closely protected core technology of Windows, and Microsoft won't give out the source code just to anyone to port it to a competitive real-time environment, even if porting it to a real-time environment were technically possible, which I highly doubt. So your option is really to go with a completely VI-based ModBus library. There are several Alliance Members that sell VI-based ModBus libraries, and NI recently developed one based on the ModVIEW library from CIT Engineering, where I work. I'm not sure what NI's plans and conditions are for this library, but you can also check out http://www.citengineering.com/pagesEN/products/modbus.aspx. Rolf Kalbermatter
  2. This DLL is a COM DLL and therefore does not export functions to call directly. As such the Call Library Node won't work. If the DLL contains a so-called type library, or at least is installed with one, you have a chance to use the ActiveX interface in LabVIEW. However, ActiveX comes in two flavours. One is ActiveX Controls, which provide some form of user interface and can be inserted into the ActiveX Container in LabVIEW. The other flavour is Automation Objects, which have no UI component and cannot be put into an ActiveX container. Instead you place an ActiveX refnum on the front panel and then browse from there to the actual object you want to use. Then you wire that refnum to the Open Automation Refnum function, which calls the OLE function to instantiate the desired object through the DLL-provided object factory in DllGetClassObject(). Once Open Automation Refnum returns successfully you should be able to wire it to the Property Node or Method Node to select the appropriate property to access or method to execute. Don't forget to close the refnum at the end with Close Refnum to avoid memory leaks. Rolf Kalbermatter
  3. I was looking into this lately, and scripting seems to be another feature which is put behind the new licensing scheme (together with XNode development, it seems). So there are really only two ways to get scripting in LabVIEW 8.0. The first is to get a license from NI to do that and activate it in the license manager, and the second would be illegal. Rolf Kalbermatter
  4. Yep, Firefox download worked well too. Wouldn't ever have tried the same with IE though. Even 50MB files used to be a one out of ten chance to work. Rolf Kalbermatter
  5. It doesn't seem to work overseas. Whatever serial number I could come up with from all the different SSP shipments etc., it kept telling me that it is an invalid serial number for the product I was trying to activate. NI support wasn't very clear, but claimed that it doesn't work yet and that I should just continue to use evaluation mode until I receive the proper SSP shipment. But that dialog at startup is for sure annoying. Rolf Kalbermatter
  6. Because in all LabVIEW versions up to 7.1 the application builder has no way to add the VISA installer, or that of any other IO library such as NI-DAQ, NI-488, etc. They all come with the appropriate hardware, can be downloaded from the NI site, or can be copied from the Device Driver CD-ROM. In the case of VISA, or even worse NI-DAQ, it is not very likely that everybody would want their installer bloated with hundreds of MBs of driver installation, which someone might have to reinstall anyhow because a new driver version has been released since you built your app. Rolf Kalbermatter
  7. LabVIEW itself has always been a nice citizen as far as undesired interaction with other installations on the same machine is concerned. I do not expect any difference with LabVIEW 8, unless the packagers at NI somehow messed up and built a bug into the installer. With add-on toolkits it is sometimes a different story. All native LabVIEW add-on toolkits should be fine, but others such as IMAQ can be a bit problematic in certain cases. You may sometimes need to install certain bug fixes, and you should definitely allow the toolkit installer to also install compatibility VIs for older LabVIEW versions into their respective directories. With drivers you have to be careful to allow the driver installation (for instance NI-DAQ) to upgrade the (DAQ) VIs in the older LabVIEW systems, which is sometimes needed for them to continue to work with the new driver. These last two points, where a toolkit or driver for LabVIEW version X may also include VI libraries for LabVIEW versions X-1, X-2, etc., require however that the older LabVIEW version is already installed on the system. Installing older versions of LabVIEW after newer versions is not a very good idea. Rolf Kalbermatter
  8. The solution to this is subVIs and state machines, and in my case also LV 2 style globals to encapsulate specific data and the operations on it. With this you can always get diagrams that fit onto a 1024*768 screen. My LabVIEW programs, sometimes involving 800 and more VIs of all sorts of complexity, seldom go over this margin, and if they do it is only the outer case or loop structure on one or two sides, but never real code. And no, this does not require globals at all. I only use globals for some simple status variables, such as a global application abort, and for application constants, but never for other data variables. If I have to grade a diagram made by someone else, any globals other than simple booleans or application constants automatically give negative points. So does not using state machines for user interface handling, or not using subVIs (and spaghetti code also scores high on my negative list). Rolf Kalbermatter
  9. There is no native toolbar functionality built into LabVIEW yet. You will have to create the toolbar by adding small customized buttons to the upper border of your front panel and adding event support in your event handler for each of those buttons to trigger the corresponding action in your UI state machine. Rolf Kalbermatter
  10. I wouldn't go and add DataSocket to the complexity of the system. A simple TCP/IP server in the RT app, similar to the Data Server/Client example, will probably work much more reliably and not add much overhead to your app. While I haven't done that on RT systems yet, I often add some TCP/IP server functionality to other apps to allow them to be monitored from all kinds of clients. Rolf Kalbermatter
  11. Unicode doesn't solve every problem there is. First, there is not one single Unicode standard: while Microsoft standardized on 16-bit Unicode (which incidentally does not have enough code space to represent every possible character on earth with a single 16-bit code), Unix usually standardizes on 32-bit Unicode. Also, the Unicode collation tables used by Microsoft have some significant differences from the ones proposed by the Unicode organization. So implementing Unicode support in LabVIEW will NOT bring a single unified Unicode system across all the supported platforms; instead it would make it just about as difficult as it is now to write text in LabVIEW in a way that shows the same characters on all supported platforms. LabVIEW itself uses multibyte characters to support non-Western code pages (with the help of the underlying OS), and that, while not perfect, serves almost as well as a Unicode solution could. And the biggest problem with global applications is not the character set but the different text direction some languages use in written text. For this there is no really well-working solution yet that would allow writing applications which can adapt to the different directions at the flick of a switch, and I doubt there is a possible solution that could figure this out automatically. Rolf Kalbermatter
  12. You summed up most of the negative sides of LLBs and got the rest of them in the other posts. I think using LLBs nowadays during development is a big no-no. I do use them sometimes for distribution of function libraries AFTER the development is finished and I wouldn't expect many changes anymore. Still this is only for distribution. The actual source code for further development or bug fixing is always kept in a directory instead and archived as such. Rolf Kalbermatter
  13. It really depends on what you want to do. A LabVIEW dialog is quite different from a Windows dialog. With Windows dialogs you could in principle use the Windows API to search for the default control and send a message to it to dismiss the dialog. This wouldn't work for LabVIEW dialogs, since LabVIEW controls are not standard Windows widget controls but are fully custom-implemented by LabVIEW itself. The easiest way would be to post a Return or Esc key press to the keyboard queue. This assumes that the dialog has these keys assigned to its OK and Cancel buttons (almost always the case for Windows dialogs, but in LabVIEW dialogs implemented by VIs written by the application developer, this specifically has to be assigned by the developer). If this doesn't work, you would have to distinguish between Windows dialogs, where you would enumerate the controls in the dialog and then send a message to the correct control using Windows API functions, and LabVIEW dialogs, where you would use VI Server to do the same. Rolf Kalbermatter
  14. You can't really create Windows-compatible binaries with the normal Unix versions of GCC. OK, if you were intimately familiar with all the ins and outs of the Wine project you might be able to do that, but I doubt there are more than a few dozen people worldwide who could. What you will need is at least a tool chain such as MinGW, with special support for the Windows Portable Executable file format. Most easily it is done with Visual C. Rolf Kalbermatter
  15. No!!!! 640*480 pixels = ~300k pixels; make this colour, so at least 24 bits per pixel, and you are already at about 900 kBytes per frame; and real time means at least 25 frames per second, so do the math. Not to mention that you need at least double the network bandwidth of what you want to put through it. There are solutions to get real-time video of this size through a normal network, but they are special streaming protocols with patented compression algorithms and not suited to be implemented in LabVIEW at all, aside from the royalties you would have to pay for such a solution. Rolf Kalbermatter
  16. [Quoting the previous reply:] "... } has a sizeof(struct example) = 8 in most C implementations, but Flatten To String and TCP Write generate only six bytes of output. So I've had to clean up those poorly aligned structures and stick a padding short between VariableA and VariableB. I have a couple of new questions I'll post in more appropriate places (feel free to help out some more!), but I wanted to express my gratitude for getting me 'unstuck'. Thank you!" Yes, LabVIEW uses byte packing on all platforms except SPARC stations, as far as I know. This is because Intel and PowerPC CPUs generally have no big penalty when accessing operands on boundaries other than their integral operand size, but a SPARC CPU has a huge penalty in those cases. Rolf Kalbermatter
  17. This VI just calls into a private LabVIEW function with the Call Library Node. Not much you could learn from this, other than that you can actually call into LabVIEW itself with the Call Library Node to call any of the functions documented in the External Code Manual. NI password-protects these VIs because it considers those exported functions private (they are not documented in the External Code Manual) and wants to reserve the right to change or remove them in the future. Yes, the data of a Picture Control is the stream of drawing commands, and its value therefore will be the last drawing command stream passed to it. Instead of flattening the data and parsing it, you can just as well create a local variable and read the data, and you will see the last stream of drawing commands passed to the control. No, the Picture Control is a control fully built into LabVIEW. It was added around LabVIEW 3 to the built-in controls of LabVIEW, but at that time it was a separate toolkit which installed the front panel control and the Picture Control Toolkit functions into the vi.lib directory. Still, the control was actually part of the LabVIEW executable itself. It was added to the standard LabVIEW distribution later on, but it is a LabVIEW control just like the numeric control or any of the graph controls, with the exception of the 3D control. But I think this control doesn't even maintain a pen location after the drawing command stream has been evaluated. I wouldn't know why it should maintain that, as it directly translates the drawing stream commands into Windows GDI drawing commands. Try to do a Draw Line function without a Move Pen function first and you will see that the line always starts at 0,0. Rolf Kalbermatter
  18. LabVIEW does some optimizing when writing to terminals and local variables. Unless you set the control to update synchronously, LabVIEW only posts the new data to a buffer and signals the control that it should update, without waiting for the control to redraw. The actual redrawing is done in a different thread, the UI thread, not more than 50 times a second, if at all necessary. This is still much faster than even the fastest human could distinguish the data. For the Value property LabVIEW does nothing of the sort. The Property Node will hang in there and only return to the diagram after the new values have been redrawn in the control, which for a graph is typically a very lengthy operation. So for the terminal and the local variable the diagram can continue to execute other code, or in the case of a loop recalculate much new data before the graph is completely redrawn, while for a Value property the actual redrawing of the graph will limit the speed at which the loop can execute. For updating different controls depending on some other condition, consider a case structure instead. Your programs written in such ways will typically perform much better. Rolf Kalbermatter
  19. Not that you would probably get much from the diagram! The actual code is most probably almost all implemented in C inside a Call Library Node or a Code Interface Node. Rolf Kalbermatter
  20. While it will be virtually impossible to create a LabVIEW routine which could beat an optimized C routine, the speed difference in general is not very large, as long as you understand how LabVIEW handles data. LabVIEW always uses resizable data structures for arrays, while in C you typically work with preallocated memory chunks. The reason is simply that because in C you have to deal with memory allocation and deallocation anyhow, you usually won't even think about reallocating an array in a loop as you add new values to it. With LabVIEW doing all the memory allocation for you, it is easy to build a loop where you use the Build Array function to construct an array, and then people are surprised that this loop takes ages to execute. Instead, using the auto-indexing feature on the tunnel of the loop border (or, for more complicated cases, preallocating the array with a shift register on the loop and a Replace Array function inside the loop) will basically create a loop which does at least as well as a non-optimizing C compiler would. So in general the speed difference in typical applications between LabVIEW and C, if you know what you are doing in LabVIEW, is minimal. The only areas where LabVIEW will usually be hard to code in a way which comes close to well-programmed C are algorithms with lots of bit manipulations and complicated array manipulations. The most important difference between LabVIEW and C, in my opinion, is that you can write quite extensive LabVIEW programs, albeit quite often with very bad performance and architecture, even though you have no idea about programming, whereas you need at least a good basic idea about programming before you can create even the most simple C program. Java, having similar high-level advantages as LabVIEW such as implicit memory management, basically suffers from the same issue: you can write very badly performing routines if you don't know how to express the algorithm in a way that helps Java use the best programming structures for the problem at hand. Rolf Kalbermatter
  21. I'm very curious about remote debugging ;-) Rolf Kalbermatter
  22. Aah, they do that already!! It is called garbage collection and kicks in as soon as the top level VI invoking the Open refnum function is going idle. This is actually a feature you can't turn off other than for VISA refnums (yeah I know they are now VISA resource names but the underlying mechanism is still a refnum) in the Options->Miscellaneous dialog. Rolf Kalbermatter
  23. I hope you do not take my remarks as saying IVision is a hobby project. It is far from that! What I was saying above is that a lot of professional users will prefer IMAQ Vision because it is from a well-known party (namely the makers of LabVIEW), and they feel more comfortable dealing with a company of that size, which gives them a certain feeling of safety. This is not just about IVision but in fact about any toolkit ever released for LabVIEW. The NI version doesn't necessarily have to be better to sell, whereas as an independent developer you really need to have some very good aces up your sleeve to get even a small part of the market. This is and has been the case since the beginnings of third-party LabVIEW toolkits, although back in the early days it may sometimes have been a little easier. Just consider that the two most successful third-party toolkits (the Radius, or so, Database Toolkit and the Graftek Vision Toolkit) were actually bought after some time by NI, and the rest have died or eke out a low-volume existence that hardly pays the administrative costs of distributing them. Some people will find IMAQ Vision too expensive for what they want to do and will be looking at IVision. The few who are left after that, because even IVision is considered too expensive (for professional use, that is; for the rest the evaluation version probably will work well), are probably not enough to carry a project of this size. They will consist of people doing this just as a hobby or being in education, where time spent is not an issue. However, even for that last group this project is way larger than what a typical student could possibly manage in a semester project or something like that. And this would assume that such a student already has quite some knowledge about LabVIEW, LabVIEW external code, C programming and image analysis theory. I for my part can shine in the first three categories, but lack considerable knowledge in the last one to carry such a project myself. Besides, I have other priorities at the moment. Rolf Kalbermatter
  24. If you control both sides of the connection it would be simple to create two small applications to do that. If the other side is already provided, however, it will all depend on what protocol that application supports and whether that protocol has some file transfer capability such as FTP has. Rolf Kalbermatter
  25. You can call external VIs from an executable using VI Server. However, you have to be aware of a few things. 1) The VI must be in the same version of LabVIEW as was used to create the executable. 2) Calculating the path of an external plugin VI relative to a VI inside the executable needs to take into account that a VI in an executable is always at the path <application directory>/<your executable>.exe/<your VI>.vi. So, assuming the plugin VIs are located in the same directory as the executable file, you need to strip two times from the path you get from the This VI Path node and then append the VI name of your plugin. 3) The plugin must be able to reference each and every subVI directly. This means any subVI used by the plugin that is not already contained in the executable needs to be copied to a place where the VI can find it. Just copying the plugin VI into the application directory will in most cases not work. Better would be to load the plugin VI in LabVIEW and then, using Save with Options, save the entire hierarchy to a new location, using LLBs, including vi.lib and any other option in the selection in the dialog. Move that LLB into the executable directory and adjust the path mentioned in point 2) to also account for the new LLB name in the path. This solution isn't very nice in that each plugin will contain all its subVIs, even though they might already be present in another plugin or the executable itself. It will however run, as long as you don't happen to create two different subVIs having the same name. Getting a more elaborate plugin distribution, with plugin-specific functions in the plugin LLB and the rest added to the executable or some support LLB, is far beyond a short explanation like this. It basically won't be possible without some custom-made tools, where the OpenG Builder may actually be a good starting point. Rolf Kalbermatter