
Rolf Kalbermatter

  Members • 3,786 posts • 244 days won
Everything posted by Rolf Kalbermatter

  1. QUOTE (Klompmans @ Jan 29 2009, 09:05 AM) Personally I would write an application that controls the device and also contains a TCP/IP server that can accept connections and act on specific commands received over them. Is it more work than using shared variables? Maybe. Is it more flexible? For sure. Does it ease distribution of your application because you do not have to deploy the shared variable engine and the variable definitions? You bet! Rolf Kalbermatter
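     A minimal sketch in C (POSIX sockets) of the command-server idea described above; in a LabVIEW application this would be built from the TCP Listen/Read/Write primitives instead, and the port number and the command names ("START", "STOP") are invented for the example:

         /* Minimal command server: accept a connection, read one command,
          * dispatch on it and reply. Port and command names are made up. */
         #include <string.h>
         #include <unistd.h>
         #include <sys/socket.h>
         #include <netinet/in.h>

         int main(void)
         {
             int listener = socket(AF_INET, SOCK_STREAM, 0);
             struct sockaddr_in addr = {0};
             addr.sin_family = AF_INET;
             addr.sin_addr.s_addr = htonl(INADDR_ANY);
             addr.sin_port = htons(5555);              /* arbitrary example port */

             bind(listener, (struct sockaddr *)&addr, sizeof addr);
             listen(listener, 1);

             for (;;) {
                 int client = accept(listener, NULL, NULL);
                 char cmd[64] = {0};
                 if (read(client, cmd, sizeof cmd - 1) > 0) {
                     if (strncmp(cmd, "START", 5) == 0)
                         write(client, "OK\n", 3);     /* start the device here */
                     else if (strncmp(cmd, "STOP", 4) == 0)
                         write(client, "OK\n", 3);     /* stop the device here */
                     else
                         write(client, "ERR\n", 4);
                 }
                 close(client);
             }
         }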
  2. QUOTE (jdunham @ Feb 13 2009, 05:01 PM) The fact that there is no typedef for error clusters wouldn't necessarily preclude changing that. They already have special-case handling that allows connecting an error cluster to a selector node or a case structure selector, as well as the popup menu entry for explaining the error. This is all done by recognizing the specific type description (the type signature, not what you understand as a typedef on the LabVIEW front panel level) that an error cluster has. But the issues are very complicated. You could create an error structure that allows structured exception handling, error classes with corresponding inheritance and whatever else you can think of, and create a monster in terms of handling, performance and memory footprint. And typedef or not, the real difficulty is in upgrading existing code that does not come from NI. A typedef wouldn't help at all, as it would still break tons of code that was crunched together by someone who had no idea about using even the limited facilities of the error cluster in a useful way. Rolf Kalbermatter
  3. QUOTE (Vende @ Feb 12 2009, 04:47 AM) Well, you usually get what you pay for. So you want free and easy, if possible without any work from you? And of course bug free and perfect too? QUOTE (normandinf @ Feb 12 2009, 10:42 PM) In my experience, I always saw the Pixelink cameras in MAX, I just needed to change drivers. Have you installed IMAQ? I'm not sure you can see the cameras in MAX without IMAQ installed. I'm pretty sure you can't see them without at least NI-IMAQdx and the FireWire support installed (which are all part of the Vision Acquisition Software, which requires a license). So no, it is not free, but yes, I would call it very easy. Rolf Kalbermatter
  4. QUOTE (austin316 @ Feb 16 2009, 02:08 AM) If the host is using addressed UDP datagrams rather than broadcasts, only the addressee will receive the datagram. What destination address do you see in Wireshark? Note that Wireshark uses promiscuous mode to receive any datagrams and TCP packets on the network interface. This is a mode that cannot be used by regular sockets as used by LabVIEW and just about any other network software, but only by tapping directly into the card driver through a filter driver. Wireshark uses the WinPcap driver for this. Rolf Kalbermatter
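     As a sketch of the addressed-versus-broadcast distinction (C, POSIX sockets; the 192.168.1.x addresses and the port are assumptions for the example): a datagram sent to a unicast address is only delivered to that host, while one sent to the subnet broadcast address reaches every listener. Wireshark sees both only because it captures in promiscuous mode.

         #include <string.h>
         #include <unistd.h>
         #include <sys/socket.h>
         #include <netinet/in.h>
         #include <arpa/inet.h>

         static void send_udp(const char *dest_ip, const char *msg, int broadcast)
         {
             int s = socket(AF_INET, SOCK_DGRAM, 0);
             if (broadcast)   /* sending to a broadcast address must be enabled explicitly */
                 setsockopt(s, SOL_SOCKET, SO_BROADCAST, &broadcast, sizeof broadcast);

             struct sockaddr_in to = {0};
             to.sin_family = AF_INET;
             to.sin_port = htons(6000);                /* arbitrary example port */
             inet_pton(AF_INET, dest_ip, &to.sin_addr);

             sendto(s, msg, strlen(msg), 0, (struct sockaddr *)&to, sizeof to);
             close(s);
         }

         int main(void)
         {
             send_udp("192.168.1.42", "hello one host", 0);   /* addressed: only .42 receives it */
             send_udp("192.168.1.255", "hello everyone", 1);  /* broadcast: all hosts on the subnet */
             return 0;
         }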
  5. QUOTE (Dan DeFriese @ Feb 14 2009, 12:48 AM) Wonder which version that was?? :-) Ages ago the UDP VIs used to be VIs implemented in LabVIEW calling into C code, and yes, back then there was no separate reference for UDP communication. But that must have been pre 6.x! I just checked: 5.1 had those VI-based UDP functions and there was no separate TCP and UDP refnum type. So either the original posters were working in 5.1 or earlier (still a very old version by 2004 standards), or they were bitten by a bug in the upgrade process when upgrading an old VI to a recent version, or they cross-linked something themselves. Rolf Kalbermatter
  6. QUOTE (Aristos Queue @ Feb 15 2009, 03:50 PM) Most likely he means an OLECHAR or BSTR string. It is basically a memory buffer with a 32-bit length prefix in front, followed by the actual characters (as wide chars, UTF-16), but the pointer points to the first character, effectively hiding the length. The problem is not the zero character at the end; in fact this layout was probably devised so that a BSTR can still be interpreted as a C wide-char string if one makes sure to zero terminate it. I posted a Unicode LLB on the NI forums ages ago that can also create and deal with BSTRs, and I add it here for completeness. LabVIEW version >= 6.1. Download File: post-349-1234772122.llb (http://lavag.org/old_files/post-349-1234772122.llb) Rolf Kalbermatter
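     A small C illustration of that layout (Windows, OleAut32); reading the length prefix directly is shown only to illustrate the description above, the official accessor is SysStringLen:

         #include <windows.h>
         #include <oleauto.h>
         #include <stdio.h>
         #include <wchar.h>

         int main(void)
         {
             BSTR s = SysAllocString(L"Hello");      /* allocates length prefix + characters + terminating 0 */

             UINT chars = SysStringLen(s);           /* official way to get the length: 5 */
             DWORD bytes = *((DWORD *)s - 1);        /* peeking at the hidden prefix: byte count, here 10 */

             wprintf(L"%ls: %u characters, %lu bytes\n", s, chars, (unsigned long)bytes);
             wprintf(L"usable as a C wide string, length %u\n", (unsigned)wcslen(s));  /* thanks to the 0 terminator */

             SysFreeString(s);
             return 0;
         }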
  7. QUOTE (Ton @ Feb 13 2009, 05:33 PM) To the original poster: A string (or array) can't really be passed by value since they are always pointers. To Ton: You are certainly right. Only for strings (or arrays) it is actually even easier than that. No new allocation of memory or copying of data is really necessary, since LabVIEW can simply pass the pointer to the data portion inside its handle to the DLL. For strings there is however one extra step. Before passing the pointer, LabVIEW resizes the string handle (but does not change its internal character count) to make sure it is one byte longer than the actual string data, and then sets this extra byte to 0. Now the data portion of the LabVIEW string handle can also be interpreted as a zero-terminated C string. On return from the C function, LabVIEW will scan the returned string buffer for a zero character up to the passed data length and then set the character count of the string handle to contain only the characters up to and excluding the zero character. Rolf Kalbermatter
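     For illustration, the DLL side of such a call could look like this minimal sketch (the function name is made up; the Call Library Node parameter would be configured as a C String Pointer):

         /* The CLN passes the pointer to the data portion of the LabVIEW string
          * handle, with the extra 0 byte appended as described above, so this
          * function simply sees a zero-terminated char buffer. */
         #include <ctype.h>

         __declspec(dllexport) void MakeUpperCase(char *str)
         {
             /* modify the buffer in place; on return LabVIEW scans for the
              * terminating zero to fix up the length in its own handle */
             for (; *str; str++)
                 *str = (char)toupper((unsigned char)*str);
         }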
  8. QUOTE (menghuihantang @ Feb 13 2009, 01:43 PM) LabVIEW internally does a lot of different handling. While data is in general of course always some memory location, the terms by reference and by value do not apply to the diagram or front panel at all, since LabVIEW uses strict dataflow programming. But the Call Library Node has a configuration dialog for a reason. There you tell LabVIEW how the DLL function expects the data, and LabVIEW will take care to pass the parameters accordingly, independent of how it uses them internally itself. If LabVIEW simply passed data in whatever format it uses internally, you would not be able to call most C functions. Rolf Kalbermatter
  9. QUOTE (menghuihantang @ Feb 12 2009, 05:00 PM) Well didn't know that var would mean pass by pointer. Normally pass-by-pointer variables carry the additional ByRef keyword in VB. var means this in Pascal, but maybe VB added that keyword to ease migration for Pascal users. Rolf Kalbermatter
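     For comparison, the same by-reference parameter written three ways (illustrative only; in the Call Library Node all of them map to an int32 passed as a pointer to value):

         /* Pascal:       procedure GetCount(var count: Integer);
          * Visual Basic: Sub GetCount(ByRef count As Long)
          * C:            the explicit pointer below                     */
         void GetCount(int *count)
         {
             *count = 42;    /* the caller sees the new value */
         }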
  10. QUOTE (Aristos Queue @ Feb 11 2009, 06:33 PM) He doesn't show the function configuration in the dialog, but I strongly suspect a wrong calling convention. That will usually crash no matter what, except sometimes with functions that have no parameters at all. I agree that the parameters look right, and since they are just integers it is hard to see how there could be anything wrong with them. The configuration of the return value will never crash unless you tell LabVIEW it is a string while it is actually void or numeric. Rolf Kalbermatter
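     A small illustration of the calling convention point, assuming 32-bit Windows: with __stdcall the callee cleans the stack, with __cdecl the caller does, so picking the wrong convention in the Call Library Node leaves the stack unbalanced and typically crashes even when every parameter type is configured correctly.

         /* The CLN must be set to "stdcall (WINAPI)" for the first function
          * and to "C" for the second one. */
         __declspec(dllexport) int __stdcall AddStd(int a, int b)   { return a + b; }
         __declspec(dllexport) int __cdecl   AddCdecl(int a, int b) { return a + b; }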
  11. QUOTE (nitulandia @ Feb 11 2009, 03:14 PM) Indeed. It is part of the strong name of .Net DLLs too. In order to be able to install a new version of a .Net assembly in the GAC, one has to use an increasing version number. However, the constructor node is strictly tied to the strong name of the assembly if one is available. So if you install a new version replacing the older one, you have to reconnect the constructor node. This is a pain and not easy to circumvent, and trying to circumvent it would cause other problems when more than one version of a .Net component is installed on the same machine. I'm not aware of a good workaround for this. Placing the .Net DLL (and its dependencies) into the application directory could mitigate the issue a bit, at the cost of saddling you with the distribution of the .Net assembly with your application. Rolf Kalbermatter
  12. QUOTE (Neville D @ Feb 5 2009, 01:27 PM) VI Server on RT targets can have bad side effects. I enabled it recently on a Compact FieldPoint 2020 controller (one of the slowest there is, I believe) and the whole application started to behave erratically, since the processor load got so high that it could not keep up with running my normal VIs. And I had only planned to execute a VI or two remotely through VI Server to send and retrieve some data. Turning VI Server off and integrating that data transfer into my already existing LabVIEW TCP/IP server protocol on the controller made everything run smoothly again. So if your CPU is not very powerful, VI Server certainly can cause problems, and that might be the reason for your timeout errors. This is probably not going to work here, since it would require building a new executable after every modification by the client, but there is a VI library on the NI site called "System Replication" that allows downloading an executable and enabling it to run, all from the host. I'm currently still experimenting with this, but it seems to work as I had hoped. Rolf Kalbermatter
  13. QUOTE (sachsm @ Jan 21 2009, 11:00 AM) Try to save it to a version before 8.5 and you get an error saying that it did not exist in that version. And yes, it is definitely part of the LabVIEW DSC add-on. Rolf Kalbermatter
  14. QUOTE (Peeker @ Feb 5 2009, 06:21 AM) That is correct. System Exec launches a process and nothing more. If you want DOS-box behaviour you need to launch the command prompt (cmd.exe) and pass the commands you want it to execute as command-line parameters. Rolf Kalbermatter
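     A small sketch of the idea (the paths are made up): built-in DOS commands such as dir only exist inside the command interpreter, so the string handed to System Exec, or to the C runtime as in this example, must launch cmd.exe and pass the actual command after /c.

         #include <stdlib.h>

         int main(void)
         {
             /* equivalent of wiring  cmd /c "dir C:\ > C:\temp\listing.txt"  to System Exec */
             return system("cmd /c dir C:\\ > C:\\temp\\listing.txt");
         }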
  15. QUOTE (Tomi Maila @ Feb 14 2007, 05:20 PM) I use a technique I got from my colleague who developed LuaVIEW. It is a simple VI that sits in the root of all my project directories and on startup scans the entire subdirectory hierarchy for folders named "unit tests". Any VI in there having a predefined connector pane that returns a boolean will be loaded and presented in a list. The UI then allows running all tests or individual ones, and the result of the run is shown. The UI also has an option to directly open the front panel, so that when a test indicates failure I can go and debug it right away. Very simple and trivial but quite effective. The only drawback is having to write the actual unit tests, but that is a problem every unit test framework has. Rolf Kalbermatter
  16. QUOTE (Aristos Queue @ Jan 29 2009, 07:01 PM) While I'm not easily given to violent actions, I sure would scream hell and fire if someone made this change. The hypothetical chance to unlock someone else's VI because it uses the same password as one I know from another VI is no justification for making the password a completely useless feature. Also, VIs would then need to remember that they were unlocked, as otherwise you would have to re-enter the password every time after closing the diagram. OMG!!! Rolf Kalbermatter
  17. QUOTE (mesmith @ Feb 5 2009, 02:43 PM) The chip is not everything. You can easily create a GPIB driver that works reasonably on an 8-bit CPU. Getting a TCP/IP stack to run properly on such resource-constrained hardware is quite a different challenge. That said, embedded controllers nowadays, even for very simple devices, have more processing power than a complete mainframe from 30 years ago and quite likely use a 32-bit CPU core too. Also, the knowledge about how to implement TCP/IP (open source implementations all over) is probably getting to a level where it is easier to dig into that than to tackle the somewhat arcane and not very easy task of getting a GPIB controller to work. But that is certainly a recent development. Even a few years back, adding a TCP/IP connection to a device was certainly more expensive than a GPIB interface, even for manufacturers that already had quite some experience with TCP/IP interfaces in their devices. Rolf Kalbermatter
  18. QUOTE (jfazekas @ Jan 26 2009, 02:24 PM) Not sure about LV classes, but a typedef in itself won't help. What you should try to do is pass your array in and out of VIs. Avoid branching the wire as much as possible, unless you branch off inside a structure to some non-reusing LabVIEW internal nodes such as Index Array, Array Size and similar. Basically you should try to have the array as one wire going through your entire application. If you need to create a branch, make sure it is in the same structure as the function that consumes the branch. You might branch to determine the size of the array, but if you do that outside of the structure while the Array Size node is inside a structure, LabVIEW will likely create a copy. If you have loops operating on the array, create a shift register and wire the array to the left terminal, wire it from that terminal to the inside of the loop, and make sure to wire it inside the loop back to the right terminal. When the loop finishes you just take the array from the right terminal and go to the next function. If you do this right, LabVIEW will usually avoid data copies even without using the In Place Element structure. In fact the In Place Element structure does not so much optimize the LabVIEW access (it does some extra optimizations) as enforce this type of wiring more strictly. With these techniques I have created VI libraries operating on huge multi-megabyte arrays at speeds comparable to what fairly optimized C algorithms could achieve, even before the in-place functions existed. Rolf Kalbermatter
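     As a rough C analogy of the single-wire advice above (not LabVIEW code): the whole processing chain works on one buffer in place, so no copy of the multi-megabyte array is ever made; branching the wire in LabVIEW corresponds to the extra buffer commented out below.

         #include <stddef.h>

         static void scale(double *data, size_t n, double factor)
         {
             for (size_t i = 0; i < n; i++)
                 data[i] *= factor;      /* modify in place */
         }

         static void offset(double *data, size_t n, double value)
         {
             for (size_t i = 0; i < n; i++)
                 data[i] += value;       /* still the same buffer */
         }

         void process(double *data, size_t n)
         {
             /* double *copy = malloc(n * sizeof *data);  <- the copy a branched wire would force */
             scale(data, n, 2.0);
             offset(data, n, -1.0);
         }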
  19. QUOTE (pallen @ Jan 26 2009, 01:43 PM) Most likely a graphics driver issue. LabVIEW does direct X Windows drawing, and depending on the graphics driver used this might cause such issues. Try experimenting with the graphics driver settings, such as color depth and acceleration. Rolf Kalbermatter
  20. I sometimes see it too. Usually a recompile (Ctrl + click on the run button) of the VI fixes it. But I'm not using LVGOOP, so your problem might have a different source that a recompile won't fix. Rolf Kalbermatter
  21. QUOTE (MJE @ Feb 3 2009, 11:43 PM) They would get lynched by even more folks for "dictating" the font they have to work with, even if it were configurable and just a default setting. QUOTE (jdunham @ Feb 4 2009, 12:09 PM) Yeah, I agree. When we build our application, we make sure those fonts are in the application's "labview.ini" file, because everything looks wretched otherwise. Forget about any kind of cross-platform GUIs. It sure would have been nice for NI to have dealt with this a bit better, though I know fonts have always been a pain for them. Not just for them. Fonts are a pain whenever you have to deal with them in any software. It's already bad when you just need to make font metrics work, but it gets impossible if you need to allow changing them. I'd rather have them spend their time on something useful than on trying to fix something impossible. Rolf Kalbermatter
  22. QUOTE (ejensen @ Feb 4 2009, 02:09 PM) The application builder stumbles over something it does not expect, since it usually doesn't happen, but it is caused by the workaround you have employed. Most likely it is the VI library code used in the librarian VIs to deal with LLBs. That code contains some specific file extension checks that will fail on files with a DLL extension, causing the subsequent code to misbehave when the file already exists. So the application builder will need to be fixed to work around a bug caused by another workaround. Rolf Kalbermatter
  23. QUOTE (ACS @ Jan 26 2009, 05:59 PM) I'm pretty sure that the LabVIEW runtime has absolutely no way of building a target of any form, PDA or not. In fact no LabVIEW runtime will ever have that ability. That is something that requires LabVIEW development environment features that can't just be executed in the runtime system. There is no runtime system in the world that I know of that could build itself or something similar out of the box. You need the corresponding development toolchain for that. Rolf Kalbermatter
  24. QUOTE (geoff @ Jan 28 2009, 03:36 PM) LabVIEW Real-Time runs on either Pharlap ETS or VxWorks for PPC. VxWorks for PPC is out of the question, since NI does not support using it on non-NI hardware and your system will most likely be an x86-based CPU. So in theory you could use the Pharlap (now I think called Ardence) ETS system on your hardware. In practice this is quite difficult, though. Pharlap ETS as used by LabVIEW RT places specific requirements on the hardware, such as supporting only certain chipsets and especially Ethernet controllers. So you will have to really confirm with an NI specialist that your intended hardware will be compatible in all aspects (don't expect them to specify a PC-104 system for you, as they would rather sell their own hardware). Expect to be able to tell them exactly which chipsets and other low-level details your system uses. Just telling them that you have a PC104 system xyz from vendor abc will not help, as they are not going to spend much time trying to find out all those low-level details themselves. There is also a thorough list of specs somewhere on the NI site of what a hardware platform must consist of to be able to install and use Pharlap ETS on it. Be aware that, since you are not using NI hardware, you will also need to purchase the Pharlap ETS runtime license that comes included with any NI hardware. Once you have gone through all this, confirmed that it will run and installed everything, the next challenge will be the inclusion of your analog and digital IO. I can understand that a vendor does not feel much like supporting LabVIEW RT, since the potential volume, especially for PC104 hardware, is very small and the effort is not. Writing your own drivers, even when using just inport and outport, will be a true challenge, and since you would be using inport and outport you should not expect high-speed data acquisition of any sort. For reading and writing digital IO and single analog values it will work, but forget about timed data acquisition. For that you need real drivers in the form of DLLs that can run on the Pharlap ETS system. And if you get such a DLL, you will also need a stub DLL for Windows exporting the same functions (doing nothing) in order to be able to develop the VIs that call that DLL on your host system. All in all this might be an interesting project if you have lots of time and/or the potential money savings from using this hardware instead of NI hardware pay off because you are going to deploy this system many thousands of times. But even then you should check out NI hardware, because if you are talking about such numbers they will be happy to come up with quite competitive offers. Rolf Kalbermatter
  25. QUOTE (nitulandia @ Feb 2 2009, 04:02 PM) I'm surprised that the directory where the project file is located should work, but if it does, that is some special handling LabVIEW does to inform .Net of additional search paths. The default and first search location of .Net for assemblies is, however, the current executable's directory. This is NOT where your VIs are; it is where the executable is located that created the current process. For the LabVIEW development system this is the directory where your LabVIEW.exe is located; for a built app it is where your myapp.exe is located. Try this out to see if it helps with the current .Net DLLs. Your installer may also put the .Net DLLs in the Global Assembly Cache (GAC). This is the second location searched by .Net for any .Net DLL if the first fails. But in order to be able to install .Net DLLs into the GAC, they need to be strongly named (meaning they have a fully defined version resource and all). These two locations (the executable directory and the GAC) are the only places .Net will normally look for required DLLs. LabVIEW may do some extra magic to tell .Net to consider the project directory too, but this is in fact something MSDN advises against, because it is an extra possibility to open the gates to DLL hell again. Rolf Kalbermatter