Everything posted by Rolf Kalbermatter

  1. I wasn't aware of the Toshiba Teli product line. Googling "TeliCam Toshiba" didn't bring up any relevant links :-) and "TeliCam" alone only showed some analog cameras! Since it is indeed an entire range of cameras with all kinds of interfaces, we definitely need to know more about the actual model used before we can say anything more specific about the best way to use it from within LabVIEW.
  2. To add to what Jordan and Tomas already said, the camera is pretty unimportant here. Since it is an analog camera, you also need some sort of image frame grabber interface that converts the analog signal to a digital computer image. That interface is what determines how you can talk to your camera. Unfortunately NI has discontinued all their analog frame grabber interfaces, otherwise the simplest solution would be to buy an NI IMAQ device and connect your camera to that. However, there are supposedly still some Alliance Members that sell third party analog frame grabbers with LabVIEW drivers. Other possible interfaces that claim to have LabVIEW support: http://www.theimagingsource.com/de_DE/products/grabbers/dfgmc4pcie/ http://www.bitflow.com/products/details/alta-an http://www.i-cubeinc.com/pdf/frame%20grabbers/TIS-DFGUSB2.pdf And as has been mentioned, if the frame grabber has a DirectX driver you should be able to access it from IMAQdx too, possibly with a little configuration effort.
  3. This appears to call libtiff, and there the function TIFFGetField() for one of the string tags. That function returns a pointer to a character string, which indeed cannot be directly configured in the LabVIEW Call Library Node. The libtiff documentation is not clear about whether the returned memory should be deallocated explicitly afterwards or whether it is managed by libtiff and properly deallocated when you close the handle. Most likely, if it doesn't mention it, the second is true, but it is definitely something to keep in mind, or otherwise you might create nasty memory leaks! As to the task of returning the string information in that pointer, there are in fact many solutions to the problem. The attached VI snippet shows two of them. "LabVIEW Manager" calls a LabVIEW manager function very much like the ominous MoveBlock() function and has the advantage that it does not require any DLL other than what is already present in the LabVIEW runtime itself. "Windows API" calls similar Windows API functions instead.
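
     For context, here is roughly what happens on the C side. A minimal sketch, assuming the code reads a string tag such as TIFFTAG_IMAGEDESCRIPTION (the tag choice is just an example): libtiff hands back a pointer into memory it owns, which is exactly why the Call Library Node cannot treat it as a regular string output.

     ```c
     #include <tiffio.h>   /* libtiff */
     #include <stdio.h>

     int main(void)
     {
         TIFF *tif = TIFFOpen("image.tif", "r");
         if (tif) {
             char *desc = NULL;
             /* For string tags, TIFFGetField() fills in a char pointer
                that points into libtiff-managed memory; do not free() it. */
             if (TIFFGetField(tif, TIFFTAG_IMAGEDESCRIPTION, &desc))
                 printf("Description: %s\n", desc);
             TIFFClose(tif);  /* the pointer becomes invalid after this */
         }
         return 0;
     }
     ```
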
  4. The problem is that you are using reparse points (the Microsoft equivalent, or more precisely the attempt to create an equivalent, of Unix symbolic or hard links). Reparse points were only really added with NTFS 3.0 (W2K), but Windows itself only sort of learned to work with them in XP, and even in W7 support is pretty limited and far from being recognized properly by many Windows components that aren't directly part of the Windows kernel. LabVIEW has never heard of them and treats them logically as whatever the reparse point is used for, namely either the file or directory it points at. LabVIEW for Linux and Mac OSX can properly deal with symbolic links (hard links are, as far as applications are concerned, fully transparent anyhow).

     On Windows LabVIEW does actually support shortcuts (the Win95 placebo for path redirection support) but does not offer functionality to allow an application to have any influence on how LabVIEW deals with them. When you pass a path that contains one or more shortcuts to the File Open/Create or the File/Directory Info function, LabVIEW automatically resolves every shortcut in the path and accesses the actual file or directory. But it will not attempt to do anything special for reparse points, and it doesn't really need to, as that is done automatically by the Windows kernel when it is passed a path that contains reparse points. It only gets complicated when you want to do something in LabVIEW that needs to be aware of the real nature of those symbolic links/reparse points, such as the OpenG ZIP Library. And that is the point I'm currently working on; it seems the only way to do that is by accessing the underlying operating system functions directly, since LabVIEW abstracts too much away here. But it shouldn't be a problem for any normal LabVIEW application as long as you are mostly interested in the contents of files and not the underlying hierarchy of the file system.

     Incidentally, my tests with the List Folder function showed that LabVIEW properly places the elements that point to directories (reparse points, symbolic links and shortcuts) into the folder name list, while elements that point to files are placed into the filenames list. And that is true for LabVIEW versions as far back as 7.0. But there is an exported C function FListDir(), which is even (poorly) documented in the External Code Reference online documentation, that returns shortcuts as files but also returns an extra file types array which indicates them to be of type 'slnk' for softlink, which was the MacOS signature for alias files. Supposedly List Folder internally uses FListDir() to do its work and properly processes the returned information to place these softlinks into the right list. Unfortunately FListDir() doesn't know about reparse points, something quite understandable if you realize that even Windows 8 only has one single API to create a symbolic link. If one wants to create hard links or retrieve the actual redirection information of a symbolic or hard link, one has to call directly into kernel space with pretty sparsely documented information to do those things.
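
     As a side note, detecting such a link from C is straightforward with the Windows API; this is the kind of underlying call a wrapper library has to make since LabVIEW itself hides it. A minimal sketch (the example path is just an illustration; on most Windows installations "C:\Users\All Users" is a symlink/junction):

     ```c
     #include <windows.h>
     #include <stdio.h>

     /* GetFileAttributes() reports the reparse attribute on the link
        itself. To open the link rather than its target you would pass
        FILE_FLAG_OPEN_REPARSE_POINT to CreateFile() and then read the
        raw redirection data with FSCTL_GET_REPARSE_POINT. */
     int main(void)
     {
         const wchar_t *path = L"C:\\Users\\All Users";
         DWORD attrs = GetFileAttributesW(path);
         if (attrs == INVALID_FILE_ATTRIBUTES)
             wprintf(L"Cannot access %s\n", path);
         else if (attrs & FILE_ATTRIBUTE_REPARSE_POINT)
             wprintf(L"%s is a reparse point\n", path);
         else
             wprintf(L"%s is a plain file or directory\n", path);
         return 0;
     }
     ```
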
  5. While this might be a possible option and is done in other software components to get around the problem of changing authors for different components, I do think it is made more complicated by the fact that there would have to be some sort of body that actually incorporates the "OpenG Group". For several open source projects that I know of and which use such a catch-all copyright notice, there actually exists a registered non-profit organization under that name that can function as a copyright holder umbrella for the project. Just making up an OpenG Group without some organizational entity may legally not be enough. Personally I would be fine with letting my copyright on OpenG components be represented by such an entity. Now, even if such an entity existed, there would be one serious problem at the moment. You can't just declare that any code provided to OpenG in the past falls under this new regime. Every author would have to be contacted and would have to give his approval to be represented in such a way through the entity. Code from authors who wouldn't give permission or can't be contacted would need to remain as is or get removed from the next distribution that uses this new copyright notice. And there would need to be some agreement that every new submitter has to accept, stating that any newly submitted code falls under this rule. All in all it's doable, but quite a bit of work, and I'm not sure the OpenG community is still active enough that anyone would really care enough to pick this up.
  6. That display is really a computer terminal. It was released in 1974, about 12 years before LabVIEW 1.0 was even invented/released. So I would guess that even LabVIEW 3.0 is very unlikely to have ever had any specific library for this thing. What are you trying to do there? Reading the display contents over the optional RS-232 interface or something? It would probably involve knowing exactly which type of interface you have installed in that thing and then using the according protocol. In this catalog on page 273 you can see that the interface was selectable as an option for many popular computers of that time, each of them with their own specific terminal data protocol. That was before DEC's VT100 terminal set some sort of de facto terminal protocol standard. It might nowadays be pretty hard to come by protocol definitions for some of those interfaces. Interesting to see that this thing cost almost $9000 at its introduction, excluding any interface options.
  7. No, LabVIEW for Linux is only compiled for x86 (and, as of LabVIEW 2014, also x64), meaning it will only run on Intel x86 compatible processors. The NI Linux RT version for their ARM targets (myRIO, cRIO 906x) could theoretically be made to work on this, but not without some serious effort. Unfortunately it is not like you can just copy the image over; you would rather have to download the source code for the NI Linux RT distribution, adapt it to the hardware resources available on this board, and compile your own Linux kernel image and libraries for this target. Even if that succeeds (which given enough determination would be possible) there is another problem: licensing! When you buy an NI RT hardware platform you also buy a LabVIEW runtime license. NI will want a license fee if you plan to install the LabVIEW RT runtime kernel (nirt.so and other stuff) on non-NI embedded hardware!
  8. Yes, when developing the LabPython interface, which also has an option to use a script node. That is why I then added a VI interface to LabPython, which made it possible to execute Python scripts that are determined at runtime rather than at compile time. However, I'm not sure how to do that for MathScript.
  9. It's not an overzealous optimization but one that is made on purpose. I can't find the reference at the moment, but there was a discussion of this in the past with some input from a LabVIEW developer about why they did that. I believe it has been like that since at least LabVIEW 7.1, but possibly even earlier. And LabVIEW 2009 doing it differently would be a bug! Edit: I did a few tests in several LabVIEW versions with the attached VIs and they all behaved consistently, resetting the value when the false case was executed: LV 7.0, 7.1.1f2, 8.0.1, 2009SP1 64 Bit, 2012SP1f5, 2013SP1f5. FP Counter.vi Test.vi
  10. It's not an upgrade code but a Product ID. Technically it is a GUID (globally unique identifier), which is virtually guaranteed to be different each time a new one is generated. This Product ID is stored inside the Build Spec for your executable. If you create a new Build Spec, this Product ID is newly generated each time. If you clone a Build Spec, the Product ID is cloned too. The installer stores the Product ID in the registry, and when installing a new product it searches for that Product ID; if it finds it, it will consider the current install to be the same product. It then checks the version, and if the new version is newer than the already installed version, it will proceed to install over the existing application. Otherwise it silently skips the installation of that product.

     Now, please forgive me, but your approach of cloning Build Specs to make a new version is most likely useless. When you create a new version of your application, you usually do that because you changed some functionality of your code. But even though your old Build Spec is retained, it still points to the VIs on disk as they are now, most likely having been overwritten by your last changes. So even if you go back and launch an older Build Spec, you will most likely build the new code (or maybe a mix of new and old code, which has even more interesting effects to debug) with the only change being that it claims to be an older version.

     The best way to maintain a history of older versions is to use proper source code control. That way you always use the same Build Spec for each version (with modifications as needed for your new version), but if you need to, you can always go back to an earlier version of your app. A poor man's solution is to copy the ENTIRE source tree, including the project file and whatever else, for each new version. I did that before using proper source code control, also zipping the entire source tree up, but while it is a solution, it is pretty wasteful and cumbersome. Here again, you don't create a new Build Spec for each version but rather keep the original Build Spec in your project.
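
     Out of curiosity, you can watch this Product ID bookkeeping from the outside with the documented Windows Installer API. A hedged sketch, not necessarily what a LabVIEW-built installer literally executes internally:

     ```c
     #include <windows.h>
     #include <msi.h>
     #include <stdio.h>
     #pragma comment(lib, "msi.lib")

     /* Enumerate installed products and print their Product ID (GUID)
        and version: the two pieces of information an upgrade install
        compares before overwriting or silently skipping. */
     int main(void)
     {
         wchar_t productCode[39];  /* GUID string incl. braces and NUL */
         for (DWORD i = 0; MsiEnumProductsW(i, productCode) == ERROR_SUCCESS; i++) {
             wchar_t version[64];
             DWORD len = 64;
             /* L"VersionString" is the INSTALLPROPERTY_VERSIONSTRING property */
             if (MsiGetProductInfoW(productCode, L"VersionString",
                                    version, &len) == ERROR_SUCCESS)
                 wprintf(L"%s  version %s\n", productCode, version);
         }
         return 0;
     }
     ```
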
  11. The Generate button generates a new Product ID. A Product ID identifies a product, and as long as the installer uses the same Product ID it will update previous versions with the same Product ID. However, the installer will NOT overwrite an installed product that has the same Product ID but a newer version. If you really want to install an older version on your machine over a newer version, the easiest solution is to completely uninstall your product and then run your previous installer to install your application from scratch. Trying to trick the installer into installing your older version over a newer version is a pretty complicated and troublesome operation that will more often go wrong than right.
  12. Ok guys, I managed to organize a nice iMac in order to be able to compile and test a shared library of the OpenG ZIP Toolkit. However, I have run into a small roadblock. I downloaded the LabVIEW 2014 for Mac Evaluation version, and despite telling me that it is the 32 bit version, it contains the 64 bit version of the import library in the cintools directory. Therefore I would like to ask if someone could send me the lvexports.a file from the cintools directory of an earlier LabVIEW for Mac (Intel) installation, so that I can compile and test the shared library in the Evaluation version on this iMac. I'm afraid that even a regular LabVIEW 2014 for Mac installation might contain a 64 bit library in both installations, so the lvexports.a file from around LabVIEW 2010 up to 2013 would probably be safer, as those versions were 32 bit only and therefore more likely to also contain a 32 bit library file.
  13. And what references are you talking about here?
  14. I have created a new package with an updated version of the OpenG ZIP library. The VI interface should have remained the same as in the previous versions. The bigger changes are under the hood. I updated the C code for the shared library to use the latest zlib sources, version 1.2.8, and made a few other changes to the way the refnums are handled in order to support 64 bit targets.

     Another significant change is the added support for NI Realtime targets. This was already sort of present for Pharlap and VxWorks targets, but in this version all current NI Realtime targets should be supported. When the OpenG package is installed to a LabVIEW 32 bit for Windows installation, an additional setup program is started during the installation to copy the shared libraries for the different targets to the realtime image folder. This setup will normally cause a password prompt for an administrative account even if the current account already has local administrator rights, although in that case it may be just a prompt asking if you really want to allow the program to make changes to the system, without requiring a password. This setup program is only started when the target is a 32 bit LabVIEW installation, since so far only 32 bit LabVIEW supports realtime development. After the installation has finished it should be possible to go to the actual target in MAX and select to install new software. Select the option "Custom software installation" and in the resulting utility find "OpenG ZIP Tools 4.1.0" and let it install the necessary shared library to your target.

     This is a preliminary package and I have not been able to test everything. What should work: Development System: LabVIEW for Windows 32 bit and 64 bit, LabVIEW for Linux 32 bit and 64 bit. Realtime Target: NI Pharlap ETS, NI VxWorks and NI Linux Realtime targets. Of these I haven't been able to test the Linux 64 bit at all, nor the NI Pharlap and NI Linux RT for x86 (cRIO 903x) targets. If you happen to install it on any of these systems I would be glad if you could report any success. If there are any problems I would like to hear about them too.

     Todo: In a following version I want to try to add support for character translation of filenames and comments inside the archive if they contain characters other than the 7 bit ASCII characters. Currently characters outside that range all get messed up. Edit (4/10/2015): Replaced package with the B2 revision, which fixes a bug in the installation files for the cRIO-903x targets. oglib_lvzip-4.1.0-b2.ogp
  15. This can't be! The DLL knows nothing about whether the caller provides a byte buffer or a uInt16 array buffer and consequently can't interpret the pSize parameter differently. And as ned told you, this is basically all C knowledge. There is nothing LabVIEW can do to make this any easier. The DLL interface follows C rules, and those are both very open (C is considered only slightly above assembly programming) and the C syntax is the absolute minimum to allow a C compiler to create legit code. It is and was never meant to describe all aspects of an API in more detail than what a C compiler needs to pass the bytes around correctly. How the parameters are formatted and used is mostly left to the programmer using that API. In C you deal with that all the time; in LabVIEW you have to do it too if you want to call DLL functions.

     LabVIEW uses normal C packing rules too. It just uses different default values than Visual C. While Visual C has a default alignment of 8 bytes, LabVIEW in the 32 bit Windows version always uses 1 byte alignment. This is legit on an x86 processor, since a significant number of extra transistors have been added to the operand fetch engine to make sure that unaligned operand accesses in memory don't incur a huge performance penalty. This is all to support the holy grail of backwards compatibility, where even the greatest OctaCore CPU still must be able to execute original 8086 code. Other CPU architectures are less forgiving, with SPARC having been really bad at unaligned operand access. However, on all platforms other than Windows 32 bit, including the Windows 64 bit version of LabVIEW, it does use the default alignment.

     Basically this means that if you have structures in C code compiled with default alignment, you need to adjust the offsets of cluster elements to align on the natural element size when programming for LabVIEW 32 bit, possibly by adding filler bytes. Not really that magic. Of course a C programmer is free to add #pragma pack() statements in his source code to change the alignment for parts or all of his code, and/or change the default alignment of the compiler through a compiler option, throwing off your assumption of the Visual C 8 byte default alignment. This special default case for LabVIEW for Windows 32 bit does make it a bit troublesome to interface to DLL functions that use structure parameters if you want to make the code run on 32 bit and 64 bit LabVIEW equally. However, so far I have always solved that by creating wrapper shared libraries anyhow, and usually I also make sure that structures I use in the code are really platform independent by making sure that all elements in a structure are explicitly aligned to their natural size.
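
     A small illustration of what that alignment difference means in practice. The struct and field names are made up; the offsets are what the respective packing rules produce:

     ```c
     #include <stdint.h>

     #pragma pack(push, 8)        /* Visual C default alignment */
     typedef struct {
         uint8_t  flag;           /* offset 0 */
         /* 3 padding bytes inserted by the compiler */
         uint32_t value;          /* offset 4, aligned to its natural size */
         uint16_t count;          /* offset 8 */
         /* 2 trailing padding bytes -> sizeof == 12 */
     } AlignedRec;
     #pragma pack(pop)

     #pragma pack(push, 1)        /* what a LabVIEW 32 bit cluster matches */
     typedef struct {
         uint8_t  flag;           /* offset 0 */
         uint32_t value;          /* offset 1, unaligned */
         uint16_t count;          /* offset 5 -> sizeof == 7 */
     } PackedRec;
     #pragma pack(pop)
     ```
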
  16. A proper API would specifically document that one can call the function with a NULL pointer as buffer to receive the necessary buffer size for calling the function again. But an API where you have to specify the input buffer size in bytes yet get back the number of returned characters would be really brain damaged. I would check again! What happens if you pass in the number of int16 values (so half the number of bytes)? Does it truncate the output at that position? And you still should be able to define it as an int16 array. That way you don't need to decimate it afterwards.
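
     For reference, the two-call idiom such a proper API would document looks roughly like this in C. GetDeviceString() is a purely hypothetical stand-in for the DLL function in question:

     ```c
     #include <stdlib.h>

     /* Hypothetical DLL function: returns 0 on success and reports the
        required/used element count through pSize. */
     extern int GetDeviceString(unsigned short *buf, int *pSize);

     unsigned short *QueryString(int *pLen)
     {
         int size = 0;
         if (GetDeviceString(NULL, &size) != 0)       /* 1st call: ask for size */
             return NULL;
         unsigned short *buf = malloc(size * sizeof *buf);
         if (buf && GetDeviceString(buf, &size) != 0) {  /* 2nd call: fill it */
             free(buf);
             return NULL;
         }
         *pLen = size;
         return buf;
     }
     ```
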
  17. LabVIEW takes the specification you set in the Call Library Node pretty literally. For C strings this means that it will parse the string buffer on the right side of the node (if connected) for a 0 termination character and then convert this string into a LabVIEW string. For a Pascal string it interprets the first byte in the string as a length and then assumes that the rest of the buffer contains as many characters (although I would hope that it uses an upper bound of the buffer size as it was passed in on the left side). Since your "string" contains embedded 0 bytes, you cannot let LabVIEW treat it as a string but instead have to tell it to treat the data as binary. And a binary string is simply an array of bytes (or in this specific case possibly an array of uInt16), and since it is a C pointer you have to pass the array as an Array Data Pointer. You have to make sure to allocate the array to a size big enough for the function to fill in its data (and probably pass that size in pSize so the function knows how big the buffer is that it can use) and, on return, resize the array buffer yourself to the size that is returned in pSize.

     And you of course have to make sure that you treat pSize correctly. This is likely the number of characters, so if this is a UTF-16 string it would be equal to the number of uInt16 elements in the array (if you use a byte array on the LabVIEW side instead, the size in LabVIEW bytes would likely be double what the function considers the size). But note the likely above! Your DLL programmer is free to require a minimum buffer size on entry and ignore pSize altogether, or treat pSize as a number of bytes, or even a number of apples if he likes. This information must be documented in the function documentation in prose and cannot be specified in the header file in any way.

     Last but not least you will need to convert the UTF-16 characters to a LabVIEW MBCS string. If you have treated it as a uInt16 array, you can basically scan the array for values that are higher than 127. These would need to be treated specially. If your array only contains values up to and including 127, you can simply convert them to U8 bytes and then convert the resulting byte array to a LabVIEW string. And yes, values above 127 are not directly translatable to ASCII. There are special translation tables that can get pretty involved, especially since they depend on your current ANSI codepage. The best would be to use the Windows API WideCharToMultiByte(), but that is also not a very trivial API to invoke through the Call Library Node. On the dark side you can find some more information here about possible solutions to do this properly.

     The crashing is pretty normal. If you deal with the Call Library Node and tell LabVIEW to pass in a certain datatype or buffer while the underlying DLL expects something else, there is really nothing LabVIEW can do to protect you from memory corruption.
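
     For completeness, a minimal sketch of the WideCharToMultiByte() conversion mentioned above, going from the UTF-16 buffer the DLL returned to a string in the current ANSI codepage (CP_ACP); error handling is trimmed:

     ```c
     #include <windows.h>
     #include <stdlib.h>

     /* Convert srcLen UTF-16 code units to the current ANSI codepage.
        Characters with no equivalent in the codepage are replaced by the
        system default character. Caller must free() the result. */
     char *Utf16ToMbcs(const wchar_t *src, int srcLen, int *dstLen)
     {
         int need = WideCharToMultiByte(CP_ACP, 0, src, srcLen, NULL, 0, NULL, NULL);
         if (need <= 0)
             return NULL;
         char *dst = malloc(need);
         if (dst)
             *dstLen = WideCharToMultiByte(CP_ACP, 0, src, srcLen, dst, need, NULL, NULL);
         return dst;
     }
     ```
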
  18. It is quite unclear to me what you are trying to do here. Can you provide some VI code and explain in more detail what your problem is? If you have a 2D array of uInt16 values, I would think this to be your compressed image data. This can be saved as a binary stream, but of course it would only be useful for your own application, which knows which decompression function to run it through to get the uncompressed data again. If you want to save it in a way that other applications can read, then you may be on the wrong track here. Generally, while specific compression algorithms can be interesting to use, there is a lot more involved in making sure that such an image file is readable in other applications too. For that there exist specific compressible image formats such as JPEG, JPEG2K, PNG and TIFF, although TIFF is probably the worst of all in terms of interoperability, as it allows in principle for an unlimited number of image formats and compressions to be used, yet there is no application in the world that could read all possible TIFF variants.
  19. That very much depends on your C# executable. Is it a simple command line tool which you can run with a command line parameter such as "on" and "off" and which returns immediately, keeping your force feedback controller running through some other background task? If your executable needs to keep running in order for the force feedback controller to stay on, you cannot really use System Exec here. Another idea would be to interface from LabVIEW directly to whatever API your C# application uses. I would assume you use some .Net interface in that C# application to control your force feedback controller, so there shouldn't be a big problem in accessing that API from LabVIEW directly and skipping your C# Form application altogether. It is indeed not a good idea to try to interface a .Net Form application to another application through either .Net calls or command line calls, as your Form application is an independent process that doesn't lend itself to the synchronous interface that .Net calls or command line calls would impose. If you consider the direct .Net interface too worrisome for some reason (it would be interesting to hear why you think so), another option might be to add an interprocess communication interface to your C# application, such as a TCP/IP server interface through which external applications can send commands to control the controller.
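
     To give an idea of how lightweight such an interface can be, here is a sketch of the client side of that last option. Everything here is an assumption for illustration: the port number (5555) and the plain-text "on"/"off" protocol are made up, and in the real setup LabVIEW's TCP Open/Write primitives would play this role rather than C code:

     ```c
     #include <winsock2.h>
     #include <string.h>
     #pragma comment(lib, "ws2_32.lib")

     /* Send a short plain-text command to a hypothetical command server
        that the C# application would run on localhost:5555. */
     int SendCommand(const char *cmd)
     {
         WSADATA wsa;
         int ok = 0;
         if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
             return -1;
         SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
         if (s != INVALID_SOCKET) {
             struct sockaddr_in addr = {0};
             addr.sin_family = AF_INET;
             addr.sin_port = htons(5555);
             addr.sin_addr.s_addr = inet_addr("127.0.0.1");
             ok = connect(s, (struct sockaddr *)&addr, sizeof(addr)) == 0
               && send(s, cmd, (int)strlen(cmd), 0) > 0;
             closesocket(s);
         }
         WSACleanup();
         return ok ? 0 : -1;
     }
     ```
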
  20. I'm afraid there is no real way to get the current LabPython to work with Python 3.0 without a recompile (and, as it seems, a modification of the shared library code for the new Python 3.x C API). It may be as trivial as the missing Py_ZeroStruct export, but it could also be a lot more complicated.
  21. While an interesting project, it looks pretty dead. No commits or any other activity on GitHub since March 2012, after only about one year of real activity! An impressive codebase, but that is at the same time a problem for anyone but the original creator wanting to pick it up again. I don't necessarily consider Java to be bad, even though it is not my preferred programming language. But things like plugging in LLVM for native code generation would be pretty hard that way.
  22. Duplicate post here. Refer to the answer there. BTW: the original topic is only related with respect to the PostLVUserEvent() function; it was explicitly about a quite different and more practical data structure.
  23. Well, Excel is (mostly) Unicode throughout, so it can simply display any language that can be represented in UTF-16 (the Unicode encoding scheme used by Windows). LabVIEW is NOT Unicode (aside from the unsupported option to enable UTF-8 support in it, but that is an experimental feature with lots of difficulties that make seamless operation very difficult). As such, LabVIEW uses whatever MBCS (MultiByte Character Set) your language setting in the International control panel defines. When LabVIEW invokes ActiveX methods in the Excel Automation Server, the strings are automatically translated from the Excel Unicode format to the LabVIEW MBCS format. But Unicode to MBCS can be lossy, since no MBCS coding scheme other than UTF-8 (which can also be considered an MBCS) can represent every Unicode character. And unlike Linux, Windows doesn't allow UTF-8 to be set as the system MBCS encoding. So if your Excel string contains characters that cannot be translated to characters in the current MBCS of the system, you get a problem. There is no simple solution to this problem, otherwise NI and many others would have solved it long ago. Anything that can be thought up for this will in some way have drawbacks elsewhere.
  24. Well, Scan from String is not meant to be used like that, just like the scanf() functions in C. They consume as many characters as match the format specifier and then stop. They only fail if no character matches the format specifier or if literal strings in the format string don't match. While I would usually use Match Pattern for such things, Scan From String can be used too. If you want to know whether the whole string was consumed, you just have to check that the offset past scan is equal to the string length, as Darin suggested, or that the remaining string is empty.
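
     The same consume-and-stop behavior is easy to demonstrate with sscanf() in C, including the %n trick that plays the role of LabVIEW's offset past scan output:

     ```c
     #include <stdio.h>
     #include <string.h>

     int main(void)
     {
         const char *input = "123abc";
         int value, consumed = 0;
         /* %n stores how many characters were consumed so far; sscanf()
            succeeds here even though "abc" is left over. */
         if (sscanf(input, "%d%n", &value, &consumed) == 1)
             printf("value=%d, consumed %d of %d characters\n",
                    value, consumed, (int)strlen(input));
         /* prints: value=123, consumed 3 of 6 characters */
         return 0;
     }
     ```
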
  25. Traditionally all references needed to be closed. This changed around LabVIEW 7.1 for control references. But closing them anyway doesn't hurt, as Close Reference is simply a no-op for them. A quick way to check whether you cause memory leaks by not closing a reference is to just typecast it into an int32 and execute it twice. If the numeric value doesn't change, it is a static refnum and doesn't need to be closed. But generally I never do that. The rule of thumb is simple: if it is a control reference it doesn't need to be closed; anything else needs closing. And closing a control reference that doesn't need to be closed doesn't hurt, so err on the safe side. Oh, and as far as performance differences go: you really can spend your valuable developer time much better than worrying about such sub-microsecond optimizations. The difference is likely negligible, and definitely too small for me to worry about.