Rolf Kalbermatter

Members
  • Posts: 3,871
  • Days Won: 262

Everything posted by Rolf Kalbermatter

  1. QUOTE (jfazekas @ Jan 26 2009, 02:24 PM) Not sure about LV Class, but a typedef in itself won't help. What you should try to do is pass your array in and out of VIs. Avoid branching as much as possible, unless you branch off inside a structure to some non-reusing LabVIEW internal nodes such as Index Array, Array Size and similar. Basically you should try to have the array as one wire going through your entire application. If you need to create a branch, make sure it is in the same structure as the function that consumes the branch. You might branch to determine the size of the array, but if you do that outside of the structure while the Array Size node is inside a structure, LabVIEW will likely create a copy. If you have loops that operate on the array, create a shift register, wire the array to the left terminal, wire it from that terminal to the inside of the loop, and make sure to wire it inside the loop back to the right terminal. When the loop finishes you just take the array from the right terminal and go on to the next function. If you do this right, LabVIEW will usually avoid data copies even without using the Inplace structure. In fact the Inplace structure does not so much optimize the LabVIEW access (it does some extra optimizations) as enforce this type of wiring more strictly (a loose C analogy follows below). With these techniques I have created VI libraries operating on huge multi-megabyte arrays at speeds comparable to what fairly optimized C algorithms could achieve, even before the Inplace functions existed. Rolf Kalbermatter
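     As a very loose C analogy of what the single-wire style buys you (LabVIEW's own optimizer works on the diagram, so this is only an illustration, not how LabVIEW is implemented): modifying a buffer in place avoids the allocation and copy that producing a new array requires.

        #include <stdlib.h>

        /* in-place: the caller's buffer is reused, comparable to one array wire
           running straight through the diagram and back out again */
        void scale_inplace(double *data, size_t n, double factor)
        {
            for (size_t i = 0; i < n; i++)
                data[i] *= factor;
        }

        /* copying: a second buffer of the same size is allocated and filled,
           comparable to a branched wire that forces LabVIEW to duplicate the array */
        double *scale_copy(const double *data, size_t n, double factor)
        {
            double *out = (double *)malloc(n * sizeof *out);
            if (out)
                for (size_t i = 0; i < n; i++)
                    out[i] = data[i] * factor;
            return out;
        }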
  2. QUOTE (pallen @ Jan 26 2009, 01:43 PM) Most likely a graphics driver issue. LabVIEW does direct X Windows drawing and depending on the graphics driver used this might cause such issues. Try experimenting with the graphics driver settings, such as color depth, acceleration and such. Rolf Kalbermatter
  3. I sometimes see it too. Usually a recompile (Ctrl-click on the run button) of the VI fixes it. But I'm not using LVGOOP, so this might be another source of the problem that a recompile won't fix. Rolf Kalbermatter
  4. QUOTE (MJE @ Feb 3 2009, 11:43 PM) They would get lynched by even more folks for "dictating" the font they have to work with, even if it would be configurable and just a default setting. QUOTE (jdunham @ Feb 4 2009, 12:09 PM) Yeah, I agree. When we build our application, we make sure those fonts are in the application's "labview.ini" file, because everything looks wretched otherwise. Forget about any kind of cross-platform GUIs. It sure would have been nice for NI to have dealt with this a bit better, though I know fonts have always been a pain for them. Not just for them. Fonts are a pain whenever you have to deal with them in any software. It's already bad when you just need to make font metrics work, and it gets nearly impossible if you need to allow changing them. I'd rather have them spend their time on something useful than on trying to fix something impossible. Rolf Kalbermatter
  5. QUOTE (ejensen @ Feb 4 2009, 02:09 PM) The application builder stumbles over something that it does not expect, since it usually doesn't happen but is caused by the workaround you have employed. Most likely it is the VI library code used in the Librarian VIs to deal with LLBs. That code contains some specific file ending checks that will fail on files with a DLL extension, causing the following code to work badly when the file already exists. So the application builder will need to be fixed to work around a bug caused by another workaround. Rolf Kalbermatter
  6. QUOTE (ACS @ Jan 26 2009, 05:59 PM) I'm pretty sure that the LabVIEW runtime has absolutely no way of building a target of any form, PDA or not. In fact no LabVIEW runtime will ever have that ability. That is something that requires LabVIEW development environment features that can't simply be executed in the runtime system. There is no runtime system in the world that I know of that could build itself or something similar out of the box. You need the corresponding development toolchain for that. Rolf Kalbermatter
  7. QUOTE (geoff @ Jan 28 2009, 03:36 PM) LabVIEW Real-Time runs on either Pharlap ETS or VxWorks for PPC. VxWorks for PPC is out of the question since NI does not support using it on non-NI hardware, and your system will most likely have an x86-based CPU. So in theory you could use the Pharlap (now, I think, called Ardence) ETS system on your hardware. In practice this is quite difficult though. Pharlap ETS as used by LabVIEW RT has specific requirements for the employed hardware, supporting only certain chipsets and especially Ethernet controllers. So you will really have to confirm with an NI specialist that your intended hardware will be compatible in all aspects (don't expect them to specify a PC-104 system for you, as they would rather sell their own hardware). Expect to be able to tell them exactly which chipsets and low-level details your system uses. Telling them just that you have PC104 system xyz from vendor abc will not help, as they are not going to spend much time trying to find out all those low-level details themselves. There is also a thorough list of specs somewhere on the NI site describing what a hardware platform must consist of to be able to install and use Pharlap ETS on it. Be aware that since you are not using NI hardware, you will also need to purchase the Pharlap ETS runtime license that comes included with any NI hardware. Once you have gone through all this, confirmed that it will run and installed everything, the next challenge will be the inclusion of your analog and digital IO. I can understand that a vendor is not very keen on supporting LabVIEW RT, since the potential volume especially for PC104 hardware is very small and the effort is not. Writing your own drivers, even when using just inport and outport, will be a true challenge, and since you would be using inport and outport you should not expect high-speed data acquisition of any sort. For reading and writing digital IO and single analog values it will work, but forget about timed data acquisition. For that you need real drivers in the form of DLLs that can run on the Pharlap ETS system. And if you get that DLL you will also need a stub DLL for Windows, exporting the same functions but doing nothing, in order to be able to develop the VIs that call that DLL on your host system (a rough sketch of such a stub is below). All in all this might be an interesting project if you have lots of time and/or the potential money saved by using this hardware instead of NI hardware pays off because you are going to deploy this system many thousands of times. But even then you should check out NI hardware, because if you are talking about such numbers they will be happy to come up with quite competitive offers. Rolf Kalbermatter
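     A minimal sketch of such a do-nothing stub, assuming purely hypothetical driver functions ReadAnalogChannel and WriteDigitalPort (the real driver DLL defines its own names and signatures; the stub only has to export matching ones so the calling VIs load and run on the development machine):

        /* hypothetical stub: exports the same functions as the real Pharlap ETS
           driver DLL but touches no hardware, so VIs calling it load fine on the host */
        #include <stdint.h>

        __declspec(dllexport) int32_t ReadAnalogChannel(int32_t channel, double *value)
        {
            (void)channel;
            *value = 0.0;      /* dummy reading */
            return 0;          /* pretend success */
        }

        __declspec(dllexport) int32_t WriteDigitalPort(int32_t port, uint32_t value)
        {
            (void)port;
            (void)value;
            return 0;          /* pretend success */
        }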
  8. QUOTE (nitulandia @ Feb 2 2009, 04:02 PM) I'm surprised that the directory where the project file is located should work, but if it does, that is some special handling LabVIEW does to inform .Net of additional paths. The default and first search location of .Net for assemblies is, however, the current executable's directory. This is NOT where your VIs are. This is where the executable is located that created the current process. For the LabVIEW development system this would be the directory where your LabVIEW.exe is located. For a built app this is where your myapp.exe is located. Try this out to see if it would help with the current .Net DLLs. Your installer may also put the .Net DLLs in the Global Assembly Cache (GAC). This is the second location searched by .Net for any .Net DLL if the first fails. But in order to be able to install .Net DLLs into the GAC they need to be strongly named (meaning they have a fully defined version resource and all). These two locations (the executable directory and the GAC) are the only locations .Net will normally look in for required DLLs. LabVIEW may do some extra magic to tell .Net to consider the project directory too, but this is in fact something that MSDN advises against, because it is an extra possibility to open the gates to DLL hell again. Rolf Kalbermatter
  9. QUOTE (ragu @ Feb 3 2009, 12:10 AM) This is not a simple topic. The right answers for you will depend on how much you already know about LabVIEW, C programming, OS details etc. Also, the information is out there. There are several good Knowledge Base and Application Note articles on www.ni.com with lots of detail in these respects. Some of them go into details you will only want to know if you have a really sound understanding of how memory management is done in C programs. The wiki here on LAVA also has some good information, which is not as in-depth but should be good for an average LabVIEW programmer. Go to events like LabVIEW Days or user group meetings. There are often presentations about advanced LabVIEW topics, such as how to make a well performing application versus a bad one, which also depends on some basic understanding of LabVIEW memory management. Go and search for this info and come back when you have more specific questions. We cannot give you a 10 hour lecture about this topic here, and it would not give you all the possible information anyhow. Rolf Kalbermatter
  10. QUOTE (Mark Yedinak @ Jan 21 2009, 11:36 AM) If you are talking about the MS RPC protocol, you may want to think again about that approach. This is not just a simple protocol but the full blown DCE RPC specification with MS proprietary extensions. The various network protocol layers this can all be embedded in, and the various security providers (including several encryption layers) the different layers can use, will likely make this a never-ending project to implement in LabVIEW. As Mark has explained, RPC is just a term used for anything that allows remote execution of code through some kind of network connection. In that sense even VI Server is an RPC protocol. So you really need to find out which dialect of RPC you are talking about. Rolf Kalbermatter
  11. QUOTE (bsvingen @ Jan 18 2009, 04:55 PM) The real reason why you need to compile the wrapper in C++ is that only C++ can resolve the C++ specific things about how to call object methods and access public object variables. A class is in fact a sort of structure, but how these things are laid out in memory and organized is something very C++ specific that a C compiler can't easily deal with. There are exceptions to this such as COM, where even the binary layout of the C++ object is strictly defined, so that given proper C accessor macros you can actually access a COM object both from C and C++, but that is a specific implementation of object classes in an attempt to make them compiler independent. The problem of mutations of the C++ object and changes to the available methods is solved there by introducing interface identifiers that one has to query through the IUnknown interface; once you have released an interface you can not ever make changes to it, but instead need to define a new interface and make that available too through the IUnknown QueryInterface method. For normally defined C++ object classes there is no real standard at all for how the objects have to be organized in memory and how various things like virtual methods, inheritance and such are implemented on a binary level. This also makes it very difficult to call C++ link libs from a different compiler than the one that created them. I usually just define an opaque pointer in the wrapper, either void * or something like: struct <name>_t; typedef struct <name>_t *<name>; The second makes it a bit clearer in the according function prototype what is meant to happen, and requires an explicit cast in the call of any object method or public variable access, but I wouldn't see why you would have to do any modifications on the original C++ source code at all (a rough sketch of such a wrapper is below). As far as LabVIEW's Call Library Node is concerned, those object pointers are just a uInt32 for me, and in LabVIEW 8.6 you would even use the Unsigned Pointer-sized Integer, which will take care of keeping your Call Library Node correct if you ever happen to move to LabVIEW for a 64-bit OS and port your C++ library and wrapper to 64 bit too. Rolf Kalbermatter
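     A minimal sketch of that opaque pointer approach, assuming a purely hypothetical C++ class Motor with methods Start and GetSpeed: the wrapper is compiled as C++ but exports a flat C API that the Call Library Node can call, and the object only ever crosses the DLL boundary as an opaque pointer.

        /* motor_wrapper.cpp - hypothetical example, compiled as C++ */
        #include "motor.h"                /* assumed header declaring class Motor */

        struct motor_t;                   /* opaque type as seen from C / LabVIEW */
        typedef struct motor_t *MotorRef;

        extern "C" {

        __declspec(dllexport) MotorRef Motor_Create(void)
        {
            return (MotorRef)new Motor(); /* hide the C++ object behind the opaque pointer */
        }

        __declspec(dllexport) int Motor_Start(MotorRef ref, double speed)
        {
            return ((Motor *)ref)->Start(speed);   /* explicit cast back to the class */
        }

        __declspec(dllexport) double Motor_GetSpeed(MotorRef ref)
        {
            return ((Motor *)ref)->GetSpeed();
        }

        __declspec(dllexport) void Motor_Destroy(MotorRef ref)
        {
            delete (Motor *)ref;
        }

        }   /* extern "C" */

     On the LabVIEW side the MotorRef is then simply passed between Call Library Nodes as an unsigned (pointer-sized) integer.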
  12. QUOTE (Dave Graybeal @ Jan 16 2009, 03:39 PM) Someone did bork up the LabVIEW 8.6 mouse click position detection quite badly in various cases. It seems a bit similar to the problem where you have a tab control and try to place a free label on it: the location where it gets dropped is usually very different from where you clicked. I found the actual offset is in fact dependent on the distance of the top left corner of the tab control to the 0/0 point of the front panel/pane. And I recently read about another similar mouse click offset problem somewhere. Rolf Kalbermatter
  13. QUOTE (Poom-Thailand @ Jan 17 2009, 05:05 AM) If you have installed the Vision Development Toolkit from the LabVIEW DVD go into Add/Remove Programs Control Panel and do a repair of your install. If you haven't yet done so then install that package. Rolf Kalbermatter
  14. QUOTE (sachsm @ Jan 14 2009, 07:44 PM) I did look at the VI Server remote protocol some time ago (around LabVIEW 7), to the point where I had an almost working LabVIEW VI library communicating with a VI server, and this is what I found: 1) All input terminal data is packed into a single packet. The same goes for the output terminal data that is sent back. 2) There is a header with data length information about the packet and a packet type identifier, and then the data, mostly packed using the LabVIEW flattened stream format (a rough sketch of that idea is below). 3) No real idea. I would guess that the VI server was, at least back then, running in the UI thread. It may not be anymore, but that has all kinds of implications for data handling towards the rest of a VI diagram. 4) The client requests some action using a single packet in which all necessary data is included, and the server answers with another packet containing all relevant data. So it is basically one packet per VI server transaction in each direction, with VI server transactions being Open VI Reference, Read/Write Attributes, Execute Method, Call By Reference, and Close Reference. So I think the VI Server protocol is about as efficient as it can be. The only problem I had with using VI server on embedded systems was with an old FieldPoint network module recently. I had hoped to take a shortcut by simply enabling the VI server protocol on it and just executing VIs remotely to interact with the controller from the host program. However, that killed the performance of the FP controller so badly that even when I did not actively interact with the VI server, my main control program on there was starting to lag badly. The solution was to implement some extra methods in the already existing TCP/IP server that I had programmed in that application for reading and writing data points from and to the tag engine on it. After that, and after disabling VI Server, the application on the controller behaved fine. And before you ask: no, the VI library I worked on is not available for review. Rolf Kalbermatter
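     Purely as an illustration of the framing described in point 2 above (the real field names, sizes and byte order are undocumented, so everything in this sketch is an assumption):

        #include <stdint.h>

        /* assumed shape of a VI Server request/response frame: a small fixed
           header followed by the payload in LabVIEW flattened-data format;
           the field names and widths below are guesses, not the real protocol */
        typedef struct {
            uint32_t packetLength;   /* total packet size in bytes */
            uint32_t packetType;     /* identifies the VI Server transaction */
            /* followed by (packetLength - header size) bytes of flattened data */
        } ViServerHeader;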
  15. QUOTE (jdunham @ Jan 14 2009, 11:21 AM) Actually you can limit both the VIs that will be accessible over remote VI server connections and the IP addresses that can connect to the server. The syntax for the keys is not documented nor easily described in full; the best approach is to configure everything in the LabVIEW options and copy the keys over to the built application's ini file (an illustrative example is below). QUOTE I don't think LV scripting will work through this connection. I don't think you could use this connection to build new VIs on the target system which could do very bad things like list or read or delete the contents of your hard drive without specific advance knowledge of VIs existing on the remote system. (That's an interesting discussion on its own, especially if I am wrong). There are a lot of VI server operations (including almost all scripting) that are not even present in a built application. Many other methods and properties are not accessible at all either. And with the above settings you can tighten it up so much that there is really not much an attacker can do other than sending malformed packets to try to make it crash. Rolf Kalbermatter
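     For illustration only, roughly what such entries can look like in a built application's ini file (the section name matches the executable name; the key names are the usual VI Server settings, but the exact access-list syntax is an assumption here, so copy the real strings from a LabVIEW.ini configured through the options dialog rather than from this sketch):

        [MyApp]
        server.tcp.enabled=True
        server.tcp.port=3363
        server.tcp.access="+192.168.1.*;-*"
        server.vi.access="+Remote API.vi;-*"
        server.vi.callsEnabled=True
        server.app.propertiesEnabled=False

     The safest route is the one described above: set the access lists in Tools >> Options >> VI Server in the development environment and copy the resulting server.* lines into the application's ini file under its own section.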
  16. QUOTE (Dan DeFriese @ Jan 14 2009, 03:02 PM) In LabVIEW 8.5 you can configure the Call Library Node with a path input that receives the DLL to call at runtime. Before LabVIEW 8.5 you indeed have to encapsulate all calls to DLLs that might not be available into VIs that get called dynamically through VI server. You can load the VI with an Open Instrument Node and either check its execution state or directly try to call it through the Call By Reference node. Call By Reference will return an error if the VI is not executable (which it will not be if it could not resolve a DLL call). Rolf Kalbermatter
  17. QUOTE (shoneill @ Jan 14 2009, 04:55 AM) I think he is either amusing himself very much here or completely beyond any help. Rolf Kalbermatter
  18. QUOTE (Antoine Châlons @ Jan 14 2009, 03:31 AM) Hmm, I haven't encountered that so far. I would expect the upgrade process to take care of that; it does with just about everything else in the immensely reworked File IO functions. Writing new code is of course another beast if you are used to the old default value. Rolf Kalbermatter
  19. QUOTE (OlivierL @ Jan 13 2009, 12:16 PM) Build Array cannot cause a memory leak in the true sense of the word. A memory leak really means that memory got allocated and the reference to it got lost somehow without the memory being freed. An unbounded Build Array is really just a memory hog, meaning it will accumulate memory over and over without ever releasing it. It is not a memory leak in the sense that LabVIEW knows perfectly well about it and will eventually release it if you unload the VI containing that huge accumulated array (the C analogy below illustrates the difference). QUOTE (jdunham @ Jan 8 2009, 05:52 PM) No, there are plenty of things the profiler won't report. I don't believe that shows global memory like queues and notifiers or stuff allocated by ActiveX nodes that you've invoked. Not sure about queues really. It might not track them. For memory allocated in external components like ActiveX servers or even DLLs, the profiler has no way of tracking that down, as it is not allocated under its control. Rolf Kalbermatter
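     A loose C analogy of that distinction: the first function keeps growing a buffer whose pointer stays known, so the memory can still be released later (a hog, like the unbounded Build Array), while the second loses the only pointer to its allocation (a true leak). Function and variable names are made up for illustration.

        #include <stdlib.h>

        /* memory hog: the buffer keeps growing, but the pointer stays known,
           so everything can still be released later (like unloading the VI) */
        static double *hog_buffer = NULL;
        static size_t hog_count = 0;

        void hog_append(double value)
        {
            double *grown = (double *)realloc(hog_buffer, (hog_count + 1) * sizeof *grown);
            if (!grown)
                return;                 /* keep the old buffer on failure */
            hog_buffer = grown;
            hog_buffer[hog_count++] = value;
        }

        /* true leak: the allocation's only reference goes out of scope,
           so the memory can never be freed again */
        void leak(void)
        {
            double *lost = (double *)malloc(1000 * sizeof *lost);
            (void)lost;                 /* pointer discarded when the function returns */
        }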
  20. QUOTE (Variant @ Jan 13 2009, 06:38 AM) Can you be a bit clearer with your question? I for one still wonder what you meant by this sentence. Rolf Kalbermatter
  21. QUOTE (Maca @ Jan 13 2009, 01:43 AM) They would be violating copyright and their license if they did. And if he has an older PID Toolkit it should still work in 8.5; in older versions it was simply a collection of VIs with their diagrams intact, so 8.5 can read them without problem. Rolf Kalbermatter
  22. QUOTE (jgcode @ Jan 9 2009, 03:02 AM) It's a beta after all! File a bug report with MS. QUOTE (Jim Kring @ Jan 12 2009, 09:37 PM) I wonder if Vista will just be like Windows ME and we can all just agree to forget about it, eventually. I think that is what will happen eventually. However, I don't think they can avoid things like UAC in Windows 7, but they will probably implement it a bit more sanely, so that you won't have to click through several password dialogs simply to install a piece of software. Windows 7 seems to be meant as the rigorously redesigned OS that Vista should have been, before they found out that doing that would take so long that they wouldn't have a successor to XP for maybe 8 years, while they needed to get something out sooner as they were already way behind the 3-year release schedule for a new Windows version. Rolf Kalbermatter