Rolf Kalbermatter

Members
  • Posts: 3,872
  • Days Won: 262

Everything posted by Rolf Kalbermatter

  1. I'm afraid you can't. An LStrHandle is a special LabVIEW datatype and there is no direct .NET equivalent for it. As you can see in extcode.h, it is a pointer to a pointer to an int32 followed by the actual (non-Unicode) character string. As such it is not even directly a long Pascal string (although with a pointer to such a beast you could simulate one), and I doubt .NET would support that either. Rolf Kalbermatter
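     For reference, this is roughly how extcode.h declares it (a sketch; the int32 and uChar typedefs below stand in for LabVIEW's own definitions):

         #include <stdint.h>

         typedef int32_t int32;   /* as typedef'd in extcode.h */
         typedef uint8_t uChar;

         /* The structure an LStrHandle points at (through two levels of
            indirection): a 32-bit byte count immediately followed by the
            (non-Unicode) characters, with no NULL terminator.            */
         typedef struct {
             int32 cnt;      /* number of bytes that follow */
             uChar str[1];   /* cnt bytes of string data    */
         } LStr, *LStrPtr, **LStrHandle;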
  2. A C compiler does just that. You use the sizeof() operator, which calculates the compile-time size of any datatype, including structures, and uses that value in the compiled code. LabVIEW obviously has no C parsing capabilities, so it can't do anything with that header file for you. Rolf Kalbermatter
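     A minimal C illustration (the structure here is just a made-up stand-in for one from your header file):

         #include <stdio.h>
         #include <stdint.h>

         /* hypothetical structure, standing in for one from the header file */
         typedef struct {
             int32_t id;
             double  value;
             char    name[32];
         } Record;

         int main(void)
         {
             /* sizeof() is evaluated by the compiler at compile time and
                includes any alignment padding inserted between the members */
             printf("sizeof(Record) = %zu bytes\n", sizeof(Record));
             return 0;
         }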
  3. LabVIEW scripting in LabVIEW 8 is put behind the license manager. There is currently no way to get LabVIEW scripting to work in LabVIEW 8 except for NI guys. Rolf Kalbermatter
  4. Embedded space problems!! The path to the files should be enclosed in double quotes. Now where is the problem? lvdiff? shell extension configuration?? Rolf Kalbermatter
  5. I'm not aware of videocap, but rather avicap. If that's what you mean, you are probably using the WebCam library from Pete Parente. Avicap is an old API from Windows 3.1 days that is based on Video for Windows (and a vfwwdm driver allows it to also access WDM drivers indirectly), but avicap has no provisions for really configuring the video device, as there wasn't much to configure programmatically back in those days, when it was common to have to set jumpers on the board for a lot of things. The only interface that allows real programmatic configuration is the DirectX/WDM based interface, and your best bet is to use an ActiveX interface for this: either buy one or roll your own. You might also want to look into Irene He's IVision library, but that is not free of charge either, although a little cheaper than IMAQ Vision. Rolf Kalbermatter
  6. I think you messed up your LabVIEW version. Up until LabVIEW 7.0 lvanlys.dll was a self-contained DLL you could easily move together with your LabVIEW files. In LabVIEW 7.1 they changed that library to use the Intel Math Kernel Library (MKL) for the number crunching work. This is actually quite a useful move, since the MKL is a recognized standard for numerical analysis. However, the MKL is not part of LabVIEW and comes with its own installer that makes some registry settings so lvanlys.dll can find it. In order for lvanlys.dll to find the MKL, it has to be properly installed on the target machine. You can arrange that by creating an installer in the Application Builder and going to the Installer Settings tab->Advanced... button. Make sure you enable "LabVIEW Runtime-Engine" and "Analyze VIs Support". Other things are usually not really necessary, but that of course depends on your application. Now you can just run that installer on any computer and you should get a proper installation of your LabVIEW executable AND the LabVIEW runtime AND the MKL. Rolf Kalbermatter
  7. You lost me. Where did you see that you get the size from somewhere? What are you trying to say? Rolf Kalbermatter
  8. Set those "Boardxx = 1" entries to "Boardxx = 0". This will keep NI-VISA from attempting to search for GPIB devices. Set "DisableAutoFind = 1". Set those "Interfacexx = 1" entries to "Interfacexx = 0". This will keep NI-VISA from attempting to search for HP/Agilent GPIB interfaces. I usually explicitly do not install Tulip VISA, since I have no plans to use HP GPIB interfaces anyhow. You could try to set "SynchronizeAllVxi11 = 0". That should speed things up too, since VISA won't even attempt to look for VXI11 devices on the TCP/IP bus. You can write a VI that uses the Config File functions to set those config settings explicitly. The attached VI disables all GPIB interfaces, so you may need to modify it if you require a GPIB interface. Usually it would be enough to only enable the first GPIB board (Board0), since you will seldom have more than one. Rolf Kalbermatter Download File:post-349-1138873852.vi
  9. USB really isn't something you should use when you need to rely on it running for weeks uninterrupted. While the idea of USB is nice, there is a myriad of things that can and often do go wrong. The first and most obvious problem is the devices themselves. There are many USB chip solutions out there that simply have more or less grave bugs in the silicon. They sometimes get fixed in later versions of the chip, if the manufacturer actually cares enough to fix its silicon instead of just releasing yet another silicon design with new bugs in it. Then there are the device drivers for those devices: quite often they are taken over more or less entirely from the silicon manufacturer's development kit, despite notices all over the place that the software is only provided as an example of how it works and shouldn't be used in production quality designs. Another quite common problem is the actual USB chip bridges in the PCs themselves and the corresponding drivers. Nowadays those PCs are designed in a matter of months, and often the newest chips are used even though they sometimes have bugs too and their drivers are not yet mature. Maybe it is possible to patch the problems later with a driver update, and maybe the bug is not really fixable in software, which with most PC manufacturers nowadays means bad luck for the end user. Basically, if you want to use USB for uninterrupted long-term data acquisition, you will have to evaluate both the data acquisition hardware and your PC platform very carefully before doing so. USB for life-supporting devices is definitely something you should never attempt unless you fully control the entire chain, from controller software and hardware to the device software and hardware. Rolf Kalbermatter
  10. Maybe some guys came out of a long winter sleep and still think things are done the way they were in the old DOS days, not wanting to understand that nowadays there are cheap, ready-to-run hardware solutions that cost less than the material you would need to build your own (not to mention the time you would need to build your own). Rolf Kalbermatter
  11. Not easily. Linux, just like Windows, does not allow applications to access hardware directly. In Windows you have to write a device driver that runs in kernel context to do that for you, and it is the same for Linux. For Windows there exist several solutions with corresponding device drivers to access the I/O address range from application level. For Linux there are also several possibilities, but a corresponding kernel driver has never made it into the official kernel sources and definitely never will, since this is a potential security risk. The only way to get that done is by looking for one of the various hacks for Linux port I/O access on the internet and putting it into your kernel yourself. This will obviously require you to at least compile something like a kernel module, but more likely to compile your own kernel. Basically, while it sometimes seems necessary to do direct port I/O, it is mostly a bad idea and definitely a potential security risk. Rolf Kalbermatter
  12. It is quite common for WinAPI functions to specify the size of structure parameters passed to the function. Most often this is done in the first member variable of the structure itself, but as you can see there are exceptions such as this one, where you pass it as a separate parameter. Why would you do that? Simple: for version compatibility. Newer implementations of such a function can add extra elements at the end of the structure. If an application that was compiled with an earlier header file calls this function, the function can check the size, possibly recognize the version this application expects, but most importantly make sure it only returns the information the structure can hold, avoiding writing past the end of the memory area the caller reserved for the structure. The string in the structure is NOT fixed size; only the area reserved for the string in the cluster is fixed size. This is common practice to avoid having to reference any parameters after a string at variable offsets, depending on the size of the embedded strings. In fact it is the only way to declare such a structure at compile time: if you use variable-sized strings in a structure, you can't declare that structure at compile time but instead need to parse it dynamically at runtime. The function will fill in a NULL terminated string in that area, up to the reserved length minus the NULL termination character. So the NULL termination character is still needed for the caller to find out how long the actual string is. This is the point of my first post: you do not want to return all 32 characters that are reserved in the structure, but instead go through those 32 characters and stop at the first occurrence of a NULL character, since that is the end of the string. It is also good practice not to assume that the string will be NULL terminated, so you go into a while loop and stop looping when a NULL character is found OR when the length of the array has been consumed. Not strictly necessary, but a good idea, and this is what is called defensive programming. Rolf Kalbermatter
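     A sketch of both points in C (the structure and the GetDeviceInfo function are hypothetical, just to show the pattern):

         #include <stddef.h>

         #define NAME_LEN 32

         /* hypothetical structure with a fixed-size string area embedded in it */
         typedef struct {
             unsigned int flags;
             char         name[NAME_LEN];  /* NULL terminated by the function,
                                              but don't rely on it blindly      */
         } DeviceInfo;

         /* hypothetical API: the caller passes the size of the structure it
            allocated, so a newer version of the function never writes past it */
         extern int GetDeviceInfo(unsigned int index, DeviceInfo *info, unsigned int cbInfo);

         /* defensive scan: stop at the first NULL or when the fixed area is consumed */
         static size_t DeviceNameLength(const DeviceInfo *info)
         {
             size_t len = 0;
             while (len < NAME_LEN && info->name[len] != '\0')
                 len++;
             return len;
         }

         /* typical call: GetDeviceInfo(0, &info, sizeof(info)); */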
  13. 1MB for a single VI is IMO a little on the large side, but it can happen for more complex VIs. It is probably a good candidate to check for common routines that can be delegated into subVIs. The number of cases is not that important, but there can be a "too much". I once inherited a program that implemented a robot sequencer. It was written in a single VI with one huge loop containing a sequence structure with one case structure in each frame, all operating on the same global state variable, and one or two case structures in there containing all the possible sequence steps and substeps of the process logic. Of course no shift registers were used at all, just globals all over the place. This program was in LabVIEW 6.1, the main VI weighed in at around 8MB, the two huge case structures had more than 200 cases each, and every single edit operation on the VI took several seconds on a medium-speed PC for LabVIEW to verify its internal graphs for syntax checks. Luckily, breaking the VI made editing faster. Completely rewriting the application was no real option, since the whole sequence logic was not documented anywhere other than in the diagram. I finally managed to modify the VI in such a way that the main sequence logic was broken into the UI handler logic in the main VI and three logical processes (which the sequencer could only process exclusively anyhow) put into their own subVIs, replacing much of the globals with shift registers and intelligent functional globals, plus some optimizations in the sequence steps themselves. The result was around 4 VIs of 1MB each, some extra helper subVIs, and no case structure with more than 70 or so cases: an application that could again be edited without any noticeable delay, worked noticeably more smoothly with less CPU load, and had a few bugs fixed along the way. Besides, having to select a specific case from the popup list during development when there are 100 or more cases is a major pain in the a** (and in earlier LabVIEW versions you would not be able to scroll in that list if it did not fit on the screen ;-) ). Rolf Kalbermatter
  14. Or maybe subpanels? Rolf Kalbermatter
  15. In fact, 34 is too long. The MS SDK uses a fixed length of 32 chars here, so there is no way to return a string longer than 31 characters through this function. Looking at the VI I see a few more problems. First, the device number is supposed to be a value between 0 and "number of devices - 1", so the -1 node in the wire from the iteration terminal to the CLN seems wrong. OK, it seems that MIDI_MAPPER (= -1) is also a valid value for this function, so the whole idea of increasing the number of devices by one and decreasing the index inside the function by one to also get the MIDI_MAPPER device is valid, although I would have made a comment about this in the diagram for the future user who sees this code. Second, the string embedded in the structure should be only 32 characters long (and not 35 as it is now), which is the value of MAXPNAMELEN. This is only not a problem here because the VI does not try to interpret the values after the string; if it did, it would read wrong data, since everything is shifted by three bytes. Last but not least, the third parameter to the CLN is wrong. This is the length of the structure in bytes, and that is definitely not 38 bytes but rather something like 52. 38 is in fact not even long enough for the function to return the entire string, as the structure up to and including the entire string placeholder is already 40 bytes long. Rolf Kalbermatter
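     Assuming the function being called is midiOutGetDevCaps (the sizes mentioned above match its MIDIOUTCAPS structure), a sketch of the call from C would look like this:

         #include <windows.h>
         #include <mmsystem.h>   /* MIDIOUTCAPSA, MAXPNAMELEN; link against winmm.lib */
         #include <stdio.h>

         int main(void)
         {
             MIDIOUTCAPSA caps;
             UINT numDevs = midiOutGetNumDevs();
             UINT i;

             /* szPname is MAXPNAMELEN (32) chars, so at most 31 characters plus
                the terminating NULL; sizeof(MIDIOUTCAPSA) is 52 bytes, not 38   */
             printf("sizeof(MIDIOUTCAPSA) = %u bytes\n", (unsigned)sizeof(MIDIOUTCAPSA));

             /* valid device IDs are 0 .. numDevs - 1; MIDI_MAPPER (-1) is also allowed */
             for (i = 0; i < numDevs; i++) {
                 if (midiOutGetDevCapsA(i, &caps, sizeof(caps)) == MMSYSERR_NOERROR)
                     printf("%u: %s\n", i, caps.szPname);
             }
             return 0;
         }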
  16. Just like the polar plot, your best bet would be to use the Picture Control. Alternatively, if you have an ActiveX control somewhere that provides this interface you could go with that, but that will then be a Windows-only solution and, depending on your LabVIEW version, may be more or less stable. Rolf Kalbermatter
  17. Traditional objects based on refnums do not have any meaning outside of the process that created them, so queues or notifiers or such won't work, as they can't be connected to each other across applications. Your options are as follows: 1) use LabVIEW 8 shared variables, 2) use DataSocket in earlier LabVIEW versions, 3) use VI Server, 4) use TCP/IP to write your own client/server communication, 5) use external files. I'm not sure about 1), but 2) is something I wouldn't recommend for a number of reasons: its performance is quite limited, it can be hard to get configured right, and it uses an undocumented protocol. 1) and 3) have the same issue of being undocumented, but at least 3) does work very well and all the low-level stuff is done for you. 4) is the hardest to implement and requires quite some experience to get a good working system that will also be able to deal with extensions in the future, but it is the most flexible and also the most performant solution. 5) is really only a last resort; I wouldn't recommend it at all, as you get all sorts of problems synchronizing access to the files between the two or more applications. Rolf Kalbermatter
  18. First, an int in modern Windows is always 32-bit, not 16-bit as you have assumed for the Bits parameter of the first function. This will probably bite you in the third parameter as well, since you need to provide enough memory for the function to write into, and if you assumed 16-bit for the values in there, your buffer is likely calculated too small. Second, while this time you have properly documented the type of AES_KEY, you still left out important information: without knowing the value of AES_MAXNR one can NOT calculate the size of the buffer you need to allocate in LabVIEW for this parameter. Basically the needed size is 4 * (AES_MAXNR + 1) 32-bit integers plus a 4-byte integer, i.e. 16 * (AES_MAXNR + 1) + 4 bytes. A single byte too small can crash your system immediately, after some time, or when you try to close your LabVIEW app, but most importantly it could also corrupt data that is vital to LabVIEW or your VI, and when saved to disk it might corrupt your VI to the point where you have to start over. My advice is still to go to www.visecurity.com and get the CryptoG toolkit, which has ready-to-use AES encryption and decryption routines out of the box. Rolf Kalbermatter
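     Assuming the DLL uses the OpenSSL declaration of AES_KEY from <openssl/aes.h>, the layout and resulting byte size would look like this (check the actual header of the library you are calling):

         /* Sketch assuming the OpenSSL declaration; verify against the aes.h
            of the library actually being used. */
         #define AES_MAXNR 14   /* OpenSSL's value: 14 rounds maximum (AES-256) */

         typedef struct aes_key_st {
             unsigned int rd_key[4 * (AES_MAXNR + 1)];  /* round keys, 32-bit each */
             int rounds;
         } AES_KEY;

         /* Bytes to allocate in LabVIEW for an AES_KEY parameter:
            4 * 4 * (AES_MAXNR + 1) + 4 = 244 bytes for AES_MAXNR = 14 */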
  19. You do not want to strip out unreadable characters. Instead, go into a while loop, abort at the first occurrence of a NULL character, resize the byte array to this length - 1, and then put it through a Byte Array To String node. The string in there is a zero-terminated string, as all C strings are, and once you encounter \00 it is over; the rest is only garbage that happened to be in memory before the function filled in the information. Rolf Kalbermatter
  20. I'm not 100% sure, but I thought the earliest version that had Save with Options->Save for previous version was 5.1, so going back to 4 with this feature would be a problem. On the other hand, the only reason not to upgrade to a newer version would be that your driver is in compiled form without diagram. In that case, and I'm sorry to say it, you are in deep shit. Rolf Kalbermatter
  21. If the data is in binary form already there is no need for any conversion. Just use a Typecast function to change the incoming string into an array of integers. Hopefully you are using network byte order, because that is what the Typecast function will assume on the incoming stream side. Of course you could write a C function to receive the data and get it directly into LabVIEW as an integer array, and if the data is in little-endian format this would get rid of the inherent byte and word swap in the Typecast function. But if receiving the data already is the bottleneck, that C function most likely won't be much faster than what you have now when receiving the data as a string. After all, 50MB is not peanuts for a PC system and will typically require several tens of seconds to be transmitted over a 100 Base TX connection, and only slightly less when 1000 Base TX is used, loading the system CPU considerably during this time as well.
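     If you did write such a C function, a sketch of the endian handling (mirroring what the Typecast does for 32-bit integers, assuming the stream arrives in network/big-endian order) could look like this:

         #include <stdint.h>
         #include <stddef.h>
         #include <string.h>
         #include <arpa/inet.h>   /* ntohl(); on Windows include <winsock2.h> instead */

         /* Convert a received big-endian byte stream into host-order 32-bit integers. */
         static void BytesToInt32Array(const uint8_t *buf, size_t numBytes, uint32_t *out)
         {
             size_t i, n = numBytes / sizeof(uint32_t);
             for (i = 0; i < n; i++) {
                 uint32_t be;
                 memcpy(&be, buf + i * sizeof(uint32_t), sizeof be);
                 out[i] = ntohl(be);   /* swaps on little-endian hosts, no-op on big-endian */
             }
         }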
  22. In Firefox you simply click on the picture to toggle between the scaled version and the unscaled one. Rolf Kalbermatter
  23. It depends on the LabVIEW version. Before version 7 you had to close every VI Server reference to avoid leaking memory. Since version 7 you only have to close VI Server references that you explicitly open with an Open function. But LabVIEW is forgiving if you try to close VI references you retrieved from property nodes, for instance, and recognizes that those are owned references and does nothing with them. So as a matter of fact I usually still use the Close Reference function on all VI Server references, independent of whether they are explicitly or implicitly opened VI references. Rolf Kalbermatter
  24. I haven't tried this, but if it does work then of course only with Rosetta. Still, there are a myriad of things that might go wrong. Probably forget about any DAQ or other hardware IO other than standard OS channels such as file IO and hopefully TCP/IP and serial. Maybe I'll try to find one of those unofficial MacOSX86 development installations and see if it would run on my Sony notebook. Rolf Kalbermatter