Rolf Kalbermatter

Members
  • Posts

    3,892
  • Joined

  • Last visited

  • Days Won

    267

Everything posted by Rolf Kalbermatter

  1. QUOTE (ejensen @ Feb 4 2009, 02:09 PM) The application builder stumbles over something it does not expect; it usually doesn't happen, but it is triggered by the workaround you have employed. Most likely it is the VI library code used in the Librarian VIs to deal with LLBs. That code contains some specific file-extension checks that will fail on files with a DLL extension, causing the subsequent code to misbehave when the file already exists. So the application builder will need to be fixed to work around a bug caused by another workaround. Rolf Kalbermatter
  2. QUOTE (ACS @ Jan 26 2009, 05:59 PM) I'm pretty sure that the LabVIEW runtime has absolutely no way of building a target of any form, PDA or not. In fact, no LabVIEW runtime will ever have that ability. That is something which requires LabVIEW development environment features that simply can't be executed in the runtime system. There is no runtime system in the world that I know of that could build itself, or something similar, out of the box. You need the corresponding development toolchain for that. Rolf Kalbermatter
  3. QUOTE (geoff @ Jan 28 2009, 03:36 PM) LabVIEW Real-Time runs on either Pharlap ETS or VxWorks for PPC. VxWorks for PPC is out of the question since NI does not support using it on non-NI hardware, and your system will most likely have an x86-based CPU. So in theory you could use the Pharlap (now, I think, called Ardence) ETS system on your hardware. In practice this is quite difficult, though. Pharlap ETS as used by LabVIEW RT has specific requirements on the hardware employed, supporting only certain chipsets and especially certain Ethernet controllers. So you will really have to confirm with an NI specialist that your intended hardware will be compatible in all aspects (don't expect them to specify a PC/104 system for you, as they would rather sell their own hardware). Expect to be able to tell them the exact chipsets and low-level details of your system. Telling them just that you have PC/104 system xyz from vendor abc will not help, as they are not going to spend much time trying to find out all those low-level details themselves. There is also a thorough list of specs somewhere on the NI site describing what a hardware platform must consist of to be able to install and use Pharlap ETS on it. Be aware that since you are not using NI hardware, you will also need to purchase the Pharlap ETS runtime license, which comes included with any NI hardware. Once you have gone through all this, confirmed that it will run, and installed everything, the next challenge will be the inclusion of your analog and digital I/O. I can understand that a vendor is not very keen on supporting LabVIEW RT, since the potential volume, especially for PC/104 hardware, is very small and the effort is not. Writing your own drivers, even when using just inport and outport, will be a true challenge, and since you would be using inport and outport, you should not expect high-speed data acquisition of any sort.
For reading and writing digital I/O and single analog values it will work, but forget about timed data acquisition. For that you need real drivers in the form of DLLs that can run on the Pharlap ETS system. And if you get that DLL, you will also need a stub DLL for Windows that exports the same functions but does nothing, in order to be able to develop the VIs that call that DLL on your host system. All in all, this might be an interesting project if you have lots of time, and/or the potential money saved by using this hardware instead of NI hardware pays off because you are going to deploy this system many thousands of times. But even then you should check out NI hardware, because if you are talking about such numbers, they will be happy to come up with quite competitive offers. Rolf Kalbermatter
  4. QUOTE (nitulandia @ Feb 2 2009, 04:02 PM) I'm surprised that the directory where the project file is located should work, but if it does, that is some special handling LabVIEW does to inform .NET of additional paths. The default and first search location of .NET for assemblies is, however, the current executable's directory. This is NOT where your VIs are; it is where the executable that created the current process is located. For the LabVIEW development system this is the directory where your LabVIEW.exe is located; for a built app it is where your myapp.exe is located. Try this out to see if it helps with the current .NET DLLs. Your installer may put the .NET DLLs in the Global Assembly Cache (GAC). This is the second location searched by .NET for any .NET DLL if the first fails. But in order to be installed into the GAC, .NET DLLs need to be strongly named (meaning they have a fully defined version resource and so on). These two locations (the executable directory and the GAC) are the only places .NET will normally look for required DLLs. LabVIEW may do some extra magic to tell .NET to consider the project directory too, but this is in fact something MSDN advises against, because it is an extra possibility to open the gates to DLL hell again. Rolf Kalbermatter
  5. QUOTE (ragu @ Feb 3 2009, 12:10 AM) This is not a simple topic. The right answers for you will depend on how much you already know about LabVIEW, C programming, OS details, etc. Also, the information is out there. There are several good Knowledge Base and Application Note articles on www.ni.com with lots of details in these respects. Some of them go into details you will only want to know if you have a really sound understanding of how memory management is done in C programs. The wiki here on LAVA also has some good information which is not as in-depth, but should be good for an average LabVIEW programmer. Go to events like LabVIEW Days or user group meetings. There are often presentations about advanced LabVIEW topics, such as what makes a well-performing application versus a bad one, which also depends on some basic understanding of LabVIEW memory management. Go and search for this info, and come back when you have more specific questions. We cannot give you a 10-hour lecture about this topic here, and it would not give you all the possible information anyhow. Rolf Kalbermatter
  6. QUOTE (Mark Yedinak @ Jan 21 2009, 11:36 AM) If you are talking about the MS RPC protocol, you may want to think again about that approach. This is not just a simple protocol but the full-blown DCE RPC specification with MS proprietary extensions. The various network protocol layers this can all be embedded in, and the various security providers the different layers can use, including several encryption layers, will likely make this a never-ending project to implement in LabVIEW. As Mark has explained, RPC is just a term used for anything that allows remote execution of code through some kind of network connection. In that sense even VI Server is an RPC protocol. So you really need to find out which dialect of RPC you are talking about. Rolf Kalbermatter
  7. QUOTE (bsvingen @ Jan 18 2009, 04:55 PM) The real reason you need to compile the wrapper as C++ is that only C++ can resolve the C++-specific details of how to call object methods and access public object variables. A class is in fact a sort of structure, but how these things are laid out in memory and organized is very C++ specific, and a C compiler can't easily deal with it. There are exceptions to this, such as COM, where even the binary layout of the C++ object is strictly defined, so that given proper C accessor macros you can actually access a COM object from both C and C++; but that is a specific implementation of object classes in an attempt to make them compiler independent. The problem of mutations of the C++ object and changes in the available methods is solved there by the introduction of interface identifiers, which one has to query through the IUnknown interface; once you have released an interface you can never change it again, but instead need to define a new interface and make that available too through the IUnknown QueryInterface method. For normally defined C++ object classes there is no real standard at all for how the objects have to be organized in memory and how various things like virtual methods, inheritance and such are implemented on a binary level. This also makes it very difficult to call C++ link libraries from a different compiler than the one that created them. I usually just define an opaque pointer in the wrapper, either void * or something like: struct <name>_t; typedef struct <name>_t *<name>; The second makes it a bit clearer in the corresponding function prototype what is meant to happen, and requires an explicit cast in the call of any object method or public variable access, but I don't see why you would have to make any modifications to the original C++ source code at all.
As far as LabVIEW's Call Library Node is concerned, those object pointers are just a uInt32 to me, and in LabVIEW 8.6 you would even use the Unsigned Pointer-sized Integer, which will take care of keeping your Call Library Node correct if you ever happen to move to LabVIEW on a 64-bit OS and port your C++ library and wrapper to 64 bit too. Rolf Kalbermatter
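The opaque-pointer wrapper described above can be sketched roughly like this; the class MyCounter and the counter_* function names are hypothetical stand-ins, not from any real library:

```cpp
// C-callable wrapper around a hypothetical C++ class, using an opaque pointer.
// From C (or LabVIEW's Call Library Node) only the typedef'd pointer is
// visible; it travels through the diagram as a pointer-sized integer.

struct counter_t;                   // incomplete type: layout hidden from callers
typedef struct counter_t *counter;  // opaque handle

class MyCounter {                   // hypothetical C++ class being wrapped
public:
    explicit MyCounter(int start) : value(start) {}
    int next() { return value++; }
private:
    int value;
};

// The extern "C" functions are what the DLL would export; they cast the
// opaque handle back to the real class, so no C++ types leak into the
// interface and no changes to the original C++ source are needed.
extern "C" counter counter_create(int start) {
    return reinterpret_cast<counter>(new MyCounter(start));
}
extern "C" int counter_next(counter c) {
    return reinterpret_cast<MyCounter *>(c)->next();
}
extern "C" void counter_destroy(counter c) {
    delete reinterpret_cast<MyCounter *>(c);
}
```

A LabVIEW caller would configure counter_create's return value and the c parameter of the other two functions as an Unsigned Pointer-sized Integer, as noted above.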
  8. QUOTE (Dave Graybeal @ Jan 16 2009, 03:39 PM) Someone did bork up LabVIEW 8.6 mouse-click position detection very badly in various cases. It seems a bit similar to the problem when you have a tab control and try to place a free label on it: the location where it is dropped is usually very different from where you clicked. I found the actual offset is in fact dependent on the distance from the top-left corner of the tab control to the 0/0 point of the front panel/pane. And I recently read about another similar mouse-click offset problem somewhere. Rolf Kalbermatter
  9. QUOTE (Poom-Thailand @ Jan 17 2009, 05:05 AM) If you have installed the Vision Development Toolkit from the LabVIEW DVD, go into the Add/Remove Programs control panel and do a repair of your installation. If you haven't done so yet, then install that package. Rolf Kalbermatter
  10. QUOTE (sachsm @ Jan 14 2009, 07:44 PM) I did look at the VI Server remote protocol some time ago (around LabVIEW 7), to the point where I had an almost working LabVIEW VI library communicating with a VI Server, and this is what I found: 1) All input terminal data is packed into a single packet; the same goes for the output terminal data that is sent back. 2) There is a header with length information about the packet and a packet-type identifier, and then the data, mostly packed using the LabVIEW flattened stream format. 3) No real idea. I would guess that the VI Server was, at least back then, running in the UI thread. It may not be anymore, but that has all kinds of implications for data handling towards the rest of a VI diagram. 4) The client requests some action using a single packet in which all necessary data is included, and the server answers with another packet containing all relevant data. So it is basically one packet per VI Server transaction in each direction, with VI Server transactions being Open VI Reference, Read/Write Attributes, Execute Method, Call By Reference, and Close Reference. So I think the VI Server protocol is about as efficient as it can be. The only problem I had with using VI Server on embedded systems was with an old FieldPoint network module recently. I had hoped to take a shortcut by simply enabling the VI Server protocol on it and just executing VIs remotely to interact with the controller from the host program. However, that killed the performance of the FP controller so much that even when I was not actively interacting with the VI Server, my main control program on there started to lag badly. The solution was to implement some extra methods in the already existing TCP/IP server that I had programmed in that application for reading and writing data points from and to the tag engine on it. After that, and after disabling VI Server, the application on the controller behaved fine. And before you ask: no, the VI library I worked on is not available for review.
Rolf Kalbermatter
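As a rough illustration of the shape described in point 2 — a header carrying the payload length and a message-type code, followed by the flattened data — a message could be framed like this. The 8-byte header layout and field order here are purely hypothetical, not the actual VI Server wire format:

```cpp
// Hypothetical framing sketch: [4-byte length][4-byte type][flattened data].
// LabVIEW's flattened stream format is big-endian, so the header fields
// are written big-endian here too for consistency.
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<uint8_t> frame_message(uint32_t msg_type,
                                   const std::vector<uint8_t> &flattened) {
    std::vector<uint8_t> packet(8 + flattened.size());
    uint32_t len = static_cast<uint32_t>(flattened.size());
    for (int i = 0; i < 4; ++i) {
        packet[i]     = static_cast<uint8_t>(len >> (24 - 8 * i));      // length
        packet[4 + i] = static_cast<uint8_t>(msg_type >> (24 - 8 * i)); // type
    }
    std::memcpy(packet.data() + 8, flattened.data(), flattened.size());
    return packet;
}
```

One such request packet and one response packet per transaction matches the one-round-trip behavior observed above.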
  12. QUOTE (jdunham @ Jan 14 2009, 11:21 AM) Actually, you can limit both the VIs that will be accessible over remote VI Server connections and the IP addresses that can connect to the server. The syntax for the keys is neither documented nor easily described in full, but configuring everything in the LabVIEW options and copying the keys over to a built application's ini file works best. QUOTE I don't think LV scripting will work through this connection. I don't think you could use this connection to build new VIs on the target system which could do very bad things like list or read or delete the contents of your hard drive without specific advance knowledge of VIs existing on the remote system. (That's an interesting discussion on its own, especially if I am wrong). There are a lot of VI Server operations (including almost all scripting) that are not even present in a built application. Many other methods and properties are not accessible at all either. And with the above settings you can tighten it up so much that there is really not much an attacker can do other than send malformed packets to try to make it crash. Rolf Kalbermatter
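To illustrate the kind of keys involved: server.tcp.access and server.vi.access are real LabVIEW ini keys, but the access-string values shown here are only indicative; as said above, generate the real values through the LabVIEW Options dialog and copy them over.

```ini
; Sketch of VI Server keys in a built application's myapp.ini
server.tcp.enabled=True
server.tcp.port=3363
; allow only one specific machine, deny everything else (illustrative syntax)
server.tcp.access="+192.168.1.10;-*"
; expose only the VIs meant to be called remotely (illustrative VI name)
server.vi.access="+Public API.vi;-*"
```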
  14. QUOTE (Dan DeFriese @ Jan 14 2009, 03:02 PM) In LabVIEW 8.5 you can configure the Call Library Node with a path input that receives the DLL to call at runtime. Before LabVIEW 8.5 you do indeed have to encapsulate all calls to DLLs that might not be available into VIs that are called dynamically through VI Server. You can load the VI with an Open VI Reference node and either check its execution state or directly try to call it through the Call By Reference node. Call By Reference will return an error if the VI is not executable (which it will not be if it could not resolve a DLL call). Rolf Kalbermatter
  16. QUOTE (shoneill @ Jan 14 2009, 04:55 AM) I think he is either amusing himself very much here or completely beyond any help. Rolf Kalbermatter
  17. QUOTE (Antoine Châlons @ Jan 14 2009, 03:31 AM) Hmm, I haven't encountered that so far. I would expect the upgrade process to take care of it; it does with just about everything else in the immensely reworked File I/O functions. Writing new code is of course another beast if you are used to the old default value. Rolf Kalbermatter
  18. QUOTE (OlivierL @ Jan 13 2009, 12:16 PM) Build Array cannot cause a memory leak in the true sense of the word. A memory leak really means that memory got allocated and the reference to it got lost somehow without the memory being freed. The unlimited Build Array function is really just a memory hog, meaning it will accumulate memory over and over without ever releasing it. It is not a memory leak in the sense that LabVIEW knows about that memory very well and will eventually release it if you unload the VI containing that huge accumulated array. QUOTE (jdunham @ Jan 8 2009, 05:52 PM) No, there are plenty of things the profiler won't report. I don't believe that shows global memory like queues and notifiers or stuff allocated by ActiveX nodes that you've invoked. I'm not sure about queues, really; it might not track them. For memory allocated in external components like ActiveX servers or even DLLs, the profiler has no way of tracking it down, as it is not allocated under its control. Rolf Kalbermatter
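The leak-versus-hog distinction can be put in C++ terms (a sketch of the concept, not of LabVIEW internals):

```cpp
// A *leak* loses the last reference to allocated memory; a *hog* keeps
// growing but stays reachable and is freed on cleanup - just as an
// unbounded Build Array is released when the owning VI unloads.
#include <cstddef>
#include <vector>

std::vector<double> hog;            // analogous to the ever-growing array

void acquire_sample(double v) {
    hog.push_back(v);               // grows without limit, but never "leaks":
}                                   // the memory is still owned and is freed
                                    // when `hog` is destroyed

// A bounded alternative: keep only the last N samples (circular buffer),
// the usual fix for an unbounded Build Array in a loop.
std::vector<double> ring(1000);
std::size_t head = 0;

void acquire_bounded(double v) {
    ring[head] = v;
    head = (head + 1) % ring.size();
}
```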
  19. QUOTE (Variant @ Jan 13 2009, 06:38 AM) Can you be a bit more clear with your question? I for one still wonder what you meant by this sentence. Rolf Kalbermatter
  20. QUOTE (Maca @ Jan 13 2009, 01:43 AM) They would be violating copyright and their license if they did. And if he has an older PID Toolkit, it should still work in 8.5. In older versions it was simply a collection of VIs with diagrams intact, so 8.5 can read it without problems. Rolf Kalbermatter
  21. QUOTE (jgcode @ Jan 9 2009, 03:02 AM) It's a Beta after all! File a bug report with MS. QUOTE (Jim Kring @ Jan 12 2009, 09:37 PM) I wonder if Vista will just be like Windows ME and we can all just agree to forget about it, eventually. I think that is what will happen eventually. However, I don't think they can avoid things like UAC in Windows 7, but they will probably implement it a bit more sanely, so that you won't have to click several times through a password dialog simply to install a piece of software. Windows 7 seems to be supposed to be the rigorously redesigned OS that Vista should have been, before they found out that doing that was going to take so long that they wouldn't have a successor to XP for maybe 8 years, but needed to get something out sooner, as they were already way behind the 3-year release schedule for a new Windows version. Rolf Kalbermatter
  22. QUOTE (dmpizzle @ Jan 9 2009, 04:28 PM) The LabVIEW Write Text function node has a mode selection (right-click pop-up menu) called "Convert EOL". When activated, the function will convert each instance of LF, CR, or CR/LF into the end-of-line identifier for the current platform. Each platform has its own native EOL, with classic Mac using CR, Unix LF, and Windows CR/LF. And I think that LabVIEW for Mac OS X still uses CR, as it is really a Carbon application despite the underlying Unix kernel. When you want to write text with a specific EOL, you have to format it accordingly and disable the above-mentioned option in the Write Text function. Rolf Kalbermatter
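What such an EOL conversion does can be sketched in C++ (a minimal illustration of the behavior described, not LabVIEW's actual implementation):

```cpp
// Normalize any of LF, CR, or CR/LF to one chosen end-of-line sequence
// ("\r\n" on Windows, "\n" on Unix, "\r" on classic Mac).
#include <cstddef>
#include <string>

std::string convert_eol(const std::string &text, const std::string &eol) {
    std::string out;
    out.reserve(text.size());
    for (std::size_t i = 0; i < text.size(); ++i) {
        char ch = text[i];
        if (ch == '\r') {
            if (i + 1 < text.size() && text[i + 1] == '\n')
                ++i;                 // swallow the LF of a CR/LF pair
            out += eol;
        } else if (ch == '\n') {
            out += eol;
        } else {
            out += ch;
        }
    }
    return out;
}
```

For example, convert_eol("a\r\nb\rc\nd", "\n") yields "a\nb\nc\nd"; writing with a specific EOL means doing this formatting yourself and disabling the node's own conversion.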
  23. QUOTE (Antoine Châlons @ Jan 10 2009, 06:12 AM) Option 3 was documented somewhere back in the old LabVIEW days (LabVIEW 4 or 5), but it seems to have gotten lost since, although it still works. Rolf Kalbermatter
  24. QUOTE (mattdl68 @ Jan 8 2009, 05:39 PM) This has been in the installed online help of LabVIEW for quite some time, at least since 8.0, although with some small modifications in each version to adapt to changes and new features of the Call Library Node configuration. LabVIEW Help Menu->Search the LabVIEW Help: Index tab: enter "call library". This will eventually get you to a topic under: Fundamentals->Calling Code Written in Text-Based Languages->Concepts->Calling Shared Libraries (DLLs). It is about the meaning of the options in the configuration of the Call Library Node. But it won't help you with your previous question of how to find out in which order and with what parameters to call a particular API. Of course, it is actually about calling an API that is typically written in C, and the whole shared library concept is based on C too, so understanding at least the basic principles of C datatypes and their handling is really a prerequisite to even understanding what all those options and settings mean. But that is not LabVIEW's fault, other than that it does not require you to know much about those things for programming some simple LabVIEW-only VIs. For more complex programming you are more or less doomed, even in LabVIEW, if you do not have some good basic programming knowledge, although the more C-specific things you don't really need to know unless you want to venture into the Call Library Node or, God forbid, the Code Interface Node. Rolf Kalbermatter
  25. QUOTE (Irene_he @ Jan 8 2009, 12:07 AM) No, I think you are right. Except the person knowing 20% does not know that percentage. That person likely thinks they know almost everything they need to do the task, only to find out later that there was about 4 to 5 times as much to learn before they could actually succeed. The problem is that some people get discouraged halfway through and then abandon the idea. I have some of those projects too. Another quite common thing is that the last 10% of the work requires 90% of the time. That is where most people stop. Rolf Kalbermatter QUOTE (mattdl68 @ Jan 7 2009, 08:44 PM) Thanks rolfk the link above will give you some history to my nightmare.......lol. What is confusing to me,is how to find the order of the dll's and functions to get what I need. There has to be some documentation out there that would give you some idea of the order in which to call functions.........one would think....lol I would think when trying to find USB devices using C++ they would need the order as well. Of course! And apart from examples in the SDKs (and DDKs), and sometimes Open Source projects where you can investigate how certain things are done, that is where the creative art of programming starts. With trial and error, combinatorial logic, experience with certain types of APIs (if you have programmed WINAPI applications, writing a Mac OS X application can seem a really illogical way of doing things, and vice versa) and some sort of magic (I have often tinkered for one or more days over how to get something to work, only to wake up one morning and suddenly have an idea that turned out to be the start of a solution), you go step by step about programming something. Programming in LabVIEW itself is not that much different, but it is on a much higher and more abstract level than tinkering with system APIs.
And there is really no magic LabVIEW could employ to make working at the system API level as easy as working at the LabVIEW diagram level, besides having people like the NI elves and other third-party developers write impressive intermediate-level software such as DAQmx to make the work for the LabVIEW programmer more bearable. It's about the way your brain cells work and have been trained. In the same vein, I've seen several brilliant system programmers cringe and falter at an attempt to produce a more than trivial LabVIEW application. Rolf Kalbermatter