Everything posted by Rolf Kalbermatter

  1. Yes, when developing the LabPython interface, which also has an option to use a script node. That is why I then added a VI interface to LabPython, which made it possible to execute Python scripts that are determined at runtime rather than at compile time. However, I'm not sure how to do that for MathScript.
  2. It's not an overzealous optimization but one that is made on purpose. I can't find the reference at the moment, but there was a discussion about this in the past, with input from a LabVIEW developer about why they did it that way. I believe it has been like that since at least LabVIEW 7.1, possibly even earlier. And LabVIEW 2009 doing it differently would be a bug! Edit: I did a few tests with the attached VIs in several LabVIEW versions and they all behaved consistently, resetting the value when the false case was executed: LV 7.0, 7.1.1f2, 8.0.1, 2009SP1 64 Bit, 2012SP1f5, 2013SP1f5. FP Counter.vi Test.vi
  3. It's not an upgrade code but a Product ID. Technically it is a GUID (globally unique identifier), which is virtually guaranteed to be different each time a new one is generated. This Product ID is stored inside the Build Spec for your executable. If you create a new Build Spec, a new Product ID is generated each time. If you clone a Build Spec, the Product ID is cloned too. The installer stores the Product ID in the registry, and when installing a new product it will search for that Product ID; if it finds it, it will consider the current install to be the same product. It then checks the version, and if the new version is newer than the already installed version, it will proceed to install over the existing application. Otherwise it silently skips the installation of that product. Now, please forgive me, but your approach of cloning Build Specs to make a new version is most likely useless. When you create a new version of your application, you usually do that because you changed some functionality of your code. But even though your old Build Spec is retained, it still points to the VIs on disk as they are now, most likely having been overwritten by your last changes. So even if you go back and launch an older Build Spec, you will most likely build the new code (or maybe a mix of new and old code, which has even more interesting effects to debug), with the only change being that it claims to be an older version. The best way to maintain a history of older versions is to use proper source code control. That way you always use the same Build Spec for each version (with modifications as needed for your new version), but if you need to, you can always go back to an earlier version of your app. A poor man's solution is to copy the ENTIRE source tree, including the project file and everything else, for each new version. I did that before using proper source code control, also zipping the entire source tree up, but while it is a solution, it is pretty wasteful and cumbersome. Here again, you don't create a new Build Spec for each version but rather keep the original Build Spec in your project.
  4. The Generate button generates a new Product ID. A Product ID identifies a product, and as long as the installer uses the same Product ID it will update previous versions with the same Product ID. However, the installer will NOT overwrite a product that has the same Product ID but a newer version. If you really want to install an older version on your machine over a newer version, the easiest solution is to completely uninstall your product and then run your previous installer to install your application from scratch. Trying to trick the installer into installing your older version over a newer version is a pretty complicated and troublesome operation that will go wrong more often than right.
  5. Ok guys, I managed to organize a nice iMac in order to be able to compile and test a shared library of the OpenG ZIP Toolkit. However, I have run into a small roadblock. I downloaded the LabVIEW 2014 for Mac Evaluation version, and despite telling me that it is the 32-bit version, it contains the 64-bit version of the import library in the cintools directory. Therefore I would like to ask if someone could send me the lvexports.a file from the cintools directory of an earlier LabVIEW for Mac (Intel) installation, so that I can compile and test the shared library in the Evaluation version on this iMac. I'm afraid that even a regular LabVIEW 2014 for Mac installation might contain a 64-bit library in both installations, so the lvexports.a file from around LabVIEW 2010 through 2013 would probably be safer, as those versions were 32-bit only and therefore more likely to also contain a 32-bit library file.
  6. I have created a new package with an updated version of the OpenG ZIP library. The VI interface should have remained the same as in the previous versions. The bigger changes are under the hood. I updated the C code for the shared library to use the latest zlib sources, version 1.2.8, and made a few other changes to the way the refnums are handled in order to support 64-bit targets. Another significant change is the added support for NI Realtime targets. This was already sort of present for Pharlap and VxWorks targets, but in this version all current NI Realtime targets should be supported. When the OpenG package is installed to a LabVIEW 32-bit for Windows installation, an additional setup program is started during the installation to copy the shared libraries for the different targets to the realtime image folder. This setup will normally cause a password prompt for an administrative account even if the current account already has local administrator rights, although in that case it may just be a prompt asking if you really want to allow the program to make changes to the system, without requiring a password. This setup program is only started when the target is a 32-bit LabVIEW installation, since so far only 32-bit LabVIEW supports realtime development. After the installation has finished, it should be possible to go to the actual target in MAX and select to install new software. Select the option "Custom software installation", find "OpenG ZIP Tools 4.1.0" in the resulting utility and let it install the necessary shared library to your target. This is a preliminary package and I have not been able to test everything. What should work: Development System: LabVIEW for Windows 32-bit and 64-bit, LabVIEW for Linux 32-bit and 64-bit; Realtime Targets: NI Pharlap ETS, NI VxWorks and NI Linux Realtime targets. Of these I haven't been able to test Linux 64-bit at all, nor the NI Pharlap and NI Linux RT for x86 (cRIO 903x) targets. If you happen to install it on any of these systems, I would be glad if you could report any success. If there are any problems, I would like to hear about them too. Todo: In a following version I want to try to add support for character translation of filenames and comments inside the archive if they contain characters other than 7-bit ASCII. Currently characters outside that range all get messed up. Edit (4/10/2015): Replaced package with B2 revision which fixes a bug in the installation files for the cRIO-903x targets. oglib_lvzip-4.1.0-b2.ogp
  7. This can't be! The DLL knows nothing about whether the caller provides a byte buffer or a uInt16 array buffer and consequently can't interpret the pSize parameter differently. And as ned told you, this is basically all C knowledge. There is nothing LabVIEW can do to make this any easier. The DLL interface follows C rules, and those are both very open (C is considered only slightly above assembly programming) and the C syntax is the absolute minimum to allow a C compiler to create legit code. It is and was never meant to describe all aspects of an API in more detail than what a C compiler needs to pass the bytes around correctly. How the parameters are formatted and used is mostly left to the programmer using that API. In C you do that all the time; in LabVIEW you have to do it too, if you want to call DLL functions. LabVIEW uses normal C packing rules too. It just uses different default values than Visual C. While Visual C has a default alignment of 8 bytes, LabVIEW in the 32-bit Windows version always uses 1-byte alignment. This is legit on an x86 processor, since a significant number of extra transistors have been added to the operand fetch engine to make sure that unaligned operand accesses in memory don't incur a huge performance penalty. This is all to support the holy grail of backwards compatibility, where even the greatest OctaCore CPU still must be able to execute original 8086 code. Other CPU architectures are less forgiving, with SPARC having been really bad if you did unaligned operand access. However, on all current platforms other than Windows 32-bit, including the Windows 64-bit version of LabVIEW, it does use the default alignment. Basically this means that if you have structures in C code compiled with default alignment, you need to adjust the offsets of cluster elements to align on the natural element size when programming for LabVIEW 32-bit, possibly by adding filler bytes. Not really that magic. Of course a C programmer is free to add #pragma pack() statements in his source code to change the alignment for parts or all of his code, and/or change the default alignment of the compiler through a compiler option, throwing off your assumption of the Visual C 8-byte default alignment. This special default case for LabVIEW for Windows 32-bit does make it a bit troublesome to interface to DLL functions that use structure parameters if you want the code to run equally on 32-bit and 64-bit LabVIEW. However, so far I have always solved that by creating wrapper shared libraries anyhow, and usually I also make sure that the structures I use in the code are really platform independent by explicitly aligning all elements in a structure to their natural size. A small C sketch of the alignment difference follows below.
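     As an illustration of that alignment difference, here is a minimal C sketch (the structure and its members are made up for this example, not taken from any real API). With the Visual C default alignment the int32 and double members are padded to their natural boundaries; with 1-byte packing, which is what a 32-bit LabVIEW for Windows cluster corresponds to, no padding is inserted.

         #include <stdio.h>
         #include <stdint.h>

         /* Visual C default alignment: members are padded to their natural boundary. */
         typedef struct {
             uint8_t flag;     /* offset 0, followed by 3 padding bytes */
             int32_t value;    /* offset 4 */
             double  reading;  /* offset 8 */
         } DefaultAligned;     /* sizeof == 16 */

         /* 1-byte packing, matching what a LabVIEW 32-bit cluster expects. */
         #pragma pack(push, 1)
         typedef struct {
             uint8_t flag;     /* offset 0 */
             int32_t value;    /* offset 1 */
             double  reading;  /* offset 5 */
         } Packed;             /* sizeof == 13 */
         #pragma pack(pop)

         int main(void)
         {
             printf("default aligned: %u bytes\n", (unsigned)sizeof(DefaultAligned));
             printf("packed:          %u bytes\n", (unsigned)sizeof(Packed));
             return 0;
         }

     In LabVIEW 32-bit you would therefore have to add the filler bytes to the cluster yourself (or use a wrapper library, as mentioned), while LabVIEW 64-bit matches the default layout directly.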
  8. A proper API would specifically document that one can call the function with a NULL pointer as buffer to receive the necessary buffer size, in order to call the function again with a properly sized buffer. But an API where you have to specify the input buffer size in bytes yet get back the number of returned characters would be really brain-damaged. I would check again! What happens if you pass in the number of int16 elements (so half the number of bytes)? Does it truncate the output at that position? And you still should be able to define it as an int16 array. That way you don't need to decimate it afterwards. A small C sketch of the usual size-query pattern follows below.
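     For reference, this is what the usual "pass NULL to query the size" pattern looks like from C. The function name and its exact size semantics are hypothetical here; the real behavior has to come from the DLL's documentation.

         #include <stdlib.h>
         #include <wchar.h>

         /* Hypothetical prototype: returns 0 on success; when buffer is NULL it
            writes the required length (in wchar_t elements) into *pSize. */
         int GetDeviceName(wchar_t *buffer, int *pSize);

         wchar_t *query_device_name(void)
         {
             int size = 0;
             /* First call with a NULL buffer: the function reports the needed size. */
             if (GetDeviceName(NULL, &size) != 0 || size <= 0)
                 return NULL;

             wchar_t *buffer = malloc(size * sizeof(wchar_t));
             if (buffer == NULL)
                 return NULL;

             /* Second call with a buffer of the reported size. */
             if (GetDeviceName(buffer, &size) != 0) {
                 free(buffer);
                 return NULL;
             }
             return buffer;
         }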
  9. LabVIEW takes the specification you set in the Call Library Node pretty literally. For C strings that means it will parse the string buffer on the right side of the node (if connected) for a 0 termination character and then convert this string into a LabVIEW string. For a Pascal string it interprets the first byte in the string as a length and then assumes that the rest of the buffer contains that many characters (although I would hope that it uses an upper bound of the buffer size as it was passed in on the left side). Since your "string" contains embedded 0 bytes, you cannot let LabVIEW treat it as a string but instead have to tell it to treat it as binary data. And a binary string is simply an array of bytes (or in this specific case possibly an array of uInt16), and since it is a C pointer you have to pass the array as an Array Data Pointer. You have to make sure to allocate the array to a size big enough for the function to fill in its data (and probably pass that size in pSize so the function knows how big the buffer is it can use), and on return resize the array buffer yourself to the size that is returned in pSize. And you of course have to make sure that you treat pSize correctly. This is likely the number of characters, so if this is a UTF-16 string then it would be equal to the number of uInt16 elements in the array (if you use a byte array instead on the LabVIEW side, the size in LabVIEW bytes would likely be double what the function considers the size). But note the likely above! Your DLL programmer is free to require a minimum buffer size on entry and ignore pSize altogether, or treat pSize as a number of bytes, or even a number of apples if he likes. This information must be documented in the function documentation in prose text and cannot be specified in the header file in any way. Last but not least, you will need to convert the UTF-16 characters to a LabVIEW MBCS string. If you have treated it as a uInt16 array, you can basically scan the array for values that are higher than 127. These would need to be treated specially. If your array only contains values up to and including 127, you can simply convert them to U8 bytes and then convert the resulting byte array to a LabVIEW string. And yes, values above 127 are not directly translatable to ASCII. There are special translation tables that can get pretty involved, especially since they depend on your current ANSI codepage. The best would be to use the Windows API WideCharToMultiByte() (a small C sketch of how it is typically called follows below), but that is also not a very trivial API to invoke through the Call Library Node. On the dark side you can find some more information here about possible solutions to do this properly. The crashing is pretty normal. If you deal with the Call Library Node and tell LabVIEW to pass in a certain datatype or buffer while the underlying DLL expects something else, there is really nothing LabVIEW can do to protect you from memory corruption.
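     For completeness, this is roughly how WideCharToMultiByte() is typically used to convert a UTF-16 buffer into the current ANSI codepage. The helper function and buffer names are just for illustration; you could wrap something like this in a small DLL instead of calling the API through the Call Library Node directly.

         #include <windows.h>
         #include <stdlib.h>

         /* Convert numChars UTF-16 code units into a newly allocated ANSI string. */
         char *utf16_to_ansi(const wchar_t *utf16, int numChars)
         {
             /* First call: ask how many bytes the ANSI representation needs. */
             int bytes = WideCharToMultiByte(CP_ACP, 0, utf16, numChars,
                                             NULL, 0, NULL, NULL);
             if (bytes <= 0)
                 return NULL;

             char *ansi = malloc(bytes + 1);
             if (ansi == NULL)
                 return NULL;

             /* Second call: perform the actual conversion. */
             WideCharToMultiByte(CP_ACP, 0, utf16, numChars, ansi, bytes, NULL, NULL);
             ansi[bytes] = '\0';
             return ansi;
         }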
  10. It is quite unclear to me what you are trying to do here. Can you provide some VI code and explain in more detail what your problem is? If you have a 2D array of uInt16 values, I would think this to be your compressed image data. This can be saved as a binary stream, but of course it would only be useful to your application, which knows which decompression function to run it through to get the uncompressed data again. If you want to save it in a way that other applications can read, then you may be on the wrong track here. Generally, while specific compression algorithms can be interesting to use, there is a lot more involved in making sure that such an image file is readable in other applications too. For that there are specific compressed image formats such as JPEG, JPEG2K, PNG and TIFF, although TIFF is probably the worst of all in terms of interoperability, as it allows in principle for an unlimited number of image formats and compressions to be used, yet there is no application in the world that could read all possible TIFF variants.
  11. That very much depends on your C# executable. Is it a simple command line tool which you can run with a command line parameter such as "on" and "off" and which returns immediately, keeping your force feedback controller running through some other background task? If your executable needs to keep running in order for the force feedback controller to stay on, you cannot really use System Exec here. Another idea would be to interface to whatever API your C# application uses directly from LabVIEW. I would assume you use some .Net interface in that C# application to control your force feedback controller, so there shouldn't be a big problem in accessing that API from LabVIEW directly and skipping your C# Form application altogether. It is indeed not a good idea to try to interface a .Net Form application to another application through either .Net calls or command line calls, as your Form application is an independent process that doesn't lend itself to the synchronous interface that .Net calls or command line calls would impose. If you consider the direct .Net interface too worrisome for some reason (it would be interesting to hear why you think so), another option might be to add an interprocess communication interface to your C# application, such as a TCP/IP server interface through which external applications can send commands to your application to control the controller.
  12. I'm afraid there is no real way to get the current LabPython to work with Python 3.0 without a recompile (and, as it seems, a modification of the shared library code for the new Python 3.x C API). It may be as trivial as just the missing Py_ZeroStruct export, but it could also be a lot more complicated.
  13. While an interesting project, it looks pretty dead. No commits or any other activity on GitHub since March 2012, after only about one year of real activity! An impressive amount of code, but that is at the same time a problem for anyone other than the original creator who wants to pick it up again. I don't necessarily consider Java to be bad, even though it is not my preferred programming language. But things like plugging in LLVM for native code generation would be pretty hard that way.
  14. Duplicate post here. Refer to the answer there. BTW: the original topic is only related with respect to the PostLVUserEvent() function; it was explicitly about a quite different and more practical data structure.
  15. Well, Excel is (mostly) Unicode throughout, so it can simply display any language that can be represented in UTF-16 (the Unicode encoding scheme used by Windows). LabVIEW is NOT Unicode (aside from the unsupported option to enable UTF-8 support in it, but that is an experimental feature with lots of difficulties that make seamless operation very difficult). As such, LabVIEW uses whatever MBCS (multibyte character set) your language setting in the International control panel defines. When LabVIEW invokes ActiveX methods in the Excel Automation Server, the strings are automatically translated from the Excel Unicode format to the LabVIEW MBCS format. But Unicode to MBCS can be lossy, since no MBCS coding scheme other than UTF-8 (which can also be considered an MBCS) can represent every Unicode character. And unlike Linux, Windows doesn't allow UTF-8 to be set as the system MBCS encoding. So if your Excel string contains characters that cannot be translated to characters in the current MBCS of the system, you get a problem. There is no simple solution to this; otherwise NI and many others would have done it long ago. Anything that can be thought out for this will in some way have drawbacks elsewhere. A small C sketch showing how such a lossy conversion can be detected follows below.
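     To make the "lossy" part concrete, here is a minimal C sketch (the sample string is made up) that uses WideCharToMultiByte() to convert a UTF-16 string to the current ANSI codepage and checks whether any character had to be replaced by the default character:

         #include <windows.h>
         #include <stdio.h>

         int main(void)
         {
             /* The Greek letter pi is not representable in most Western ANSI codepages. */
             const wchar_t *text = L"value = \u03C0";
             char ansi[64];
             BOOL usedDefault = FALSE;

             int bytes = WideCharToMultiByte(CP_ACP, WC_NO_BEST_FIT_CHARS, text, -1,
                                             ansi, (int)sizeof(ansi), NULL, &usedDefault);
             if (bytes > 0 && usedDefault)
                 printf("conversion was lossy: \"%s\"\n", ansi);
             else if (bytes > 0)
                 printf("converted without loss: \"%s\"\n", ansi);
             return 0;
         }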
  16. Well, Scan From String is not meant to be used like that, just like the scanf() functions in C. They consume as many characters as match the format specifier and then stop. They only fail if no character matches the format specifier or if literal strings in the format string don't match. While I would usually use Match Pattern for such things, Scan From String can be used too. If you want to know whether the whole string was consumed, you just have to check that the offset past scan is equal to the string length, just as Darin suggested, or that the remaining string is empty. The C sketch below shows the same behavior with sscanf().
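     A small C illustration of that behavior (the input string is arbitrary): sscanf() happily reports a successful match even though it only consumed the numeric prefix; only comparing the consumed length against the string length tells you the whole string matched.

         #include <stdio.h>
         #include <string.h>

         int main(void)
         {
             const char *input = "1234abc";
             int value = 0, consumed = 0;

             /* %n stores the number of characters consumed so far. */
             int fields = sscanf(input, "%d%n", &value, &consumed);
             printf("fields = %d, value = %d, consumed = %d of %u\n",
                    fields, value, consumed, (unsigned)strlen(input));
             /* Prints: fields = 1, value = 1234, consumed = 4 of 7 */
             return 0;
         }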
  17. Traditionally all references needed to be closed. This changed around LabVIEW 7.1 for control references. But closing them anyway doesn't hurt, as Close Reference is simply a no-op for them. A quick way to check whether you cause memory leaks by not closing a reference is to typecast it into an int32 and execute the code twice. If the numeric value doesn't change, it is a static refnum and doesn't need to be closed. But generally I never do that. The rule of thumb is simple: if it is a control reference it doesn't need to be closed; anything else needs closing. And closing a control reference that doesn't need to be closed doesn't hurt, so err on the safe side. Oh, and as far as the performance difference goes: you really can spend your valuable developer time much better than worrying about such sub-microsecond optimizations. The difference is likely negligible, and definitely too small for me to worry about.
  18. Seems to me you want to scan a hexadecimal integer. That would be %x.
  19. You don't get that guarantee on Windows anyway. Last time I did a check, the lowest timing resolution of loops under Windows was 10 ms, no matter whether I used Timed Loops or a normal Wait Until Next ms Multiple. With a wait of 0 ms I could make it go pretty fast, but the Timed Loop barked at the 0 ms value. A wait of less than 10 ms definitely showed the discrete nature of the interval, usually being near 10 ms, or almost 0 ms if the code inside allowed a near-0 ms delay. And no, there was no guarantee that it would never take more than 10 ms; there were always outliers above 10 ms if I tested for more than a few seconds. The C sketch below shows the same effect with a plain Sleep() call.
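     The same granularity can be seen outside LabVIEW. This minimal C sketch measures how long a requested 1 ms Sleep() actually takes; on a default Windows system timer (no timeBeginPeriod() adjustment) the results typically land well above 1 ms, often in the 10-16 ms range:

         #include <windows.h>
         #include <stdio.h>

         int main(void)
         {
             LARGE_INTEGER freq, start, stop;
             QueryPerformanceFrequency(&freq);

             for (int i = 0; i < 10; i++) {
                 QueryPerformanceCounter(&start);
                 Sleep(1);                       /* request a 1 ms wait */
                 QueryPerformanceCounter(&stop);
                 double ms = (double)(stop.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart;
                 printf("requested 1 ms, slept %.2f ms\n", ms);
             }
             return 0;
         }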
  20. You need to be a lot more specific. What is the problem? Why do you not want to use VISA? Did you write that DLL? Do you know C? For one, DLL interfacing requires a lot of intimate C knowledge to get right. If you don't have that knowledge, using VISA is definitely going to be much less painful and especially won't crash your application continuously during development (and quite possibly also later during execution of your application, since even a small error in interfacing a DLL will sooner or later cause problems in the form of bad behavior or crashes). If you still want to go down the DLL route, we need to know what doesn't work, what exactly you have tried, and of course the entire documentation and header files for the DLL too, in order to have even a small chance of helping you. A DLL is simply a bunch of compiled code. There is no way to determine from the DLL alone what function interfaces it contains, nor what the correct parameter types of those functions are. And last but not least, even the header file of the DLL doesn't describe how buffers need to be allocated and freed when calling the functions, so there usually needs to be some extra prose documentation that describes this. Your first post is analogous to me posting a picture of my car and saying "it doesn't work, please help me!" Only clairvoyant people with an understanding of car technology could possibly have an idea what might be the problem. But my crystal ball is currently in repair, so I need to make do with what other mere mortals have available.
  21. Definitely echo JKSH's post. As a rule your diagram should not use more than one screen. That makes life a lot easier not only for nudging out those wire bends. This rule was even valid back in those days when you had only 1024 x 768 pixel screens.
  22. You should post in the correct forum topic. This is for providing code that others can use, not for asking questions about (possible) errors.
  23. "IDN?" is a IEEE-488.2 specific command. Only devices implementing this protocol will recognize it and answer meaningfully. Most serial devices out there will NOT implement IEEE-488.2. Only typical measurement devices from manufacturers also providing GPIB options with their devices usually go to the extra length of implementing a real standard. Everybody else simply cooks up his own protocol. Why don't you try to send the commands to your device that you also tried with HyperTerminal? Just don't forget that HyperTerminal will be default convert your Enter key that you press to send of the command, into a carriage return, line feed, or carriage return/line feed character sequence and send it at the end of the data. You have to append the according \r, \n, or \r\n sequence to your command in MAX to achieve the same. And also do this same thing when you write data in LabVIEW using VISA Write.
  24. Basically, changing the value over ActiveX is equivalent to the VI Server method "Control Value.Set", which takes a label name and the value as a variant and then "pastes" it into the control. This indeed does not cause an event. The OP probably detects this by polling the controls for value changes and then generates a Value Change (signaling) event. If the user now operates the front panel, there will be an event from the user interaction and another one from the brat VI detecting a value change. Basically, there should be a way in the event loop to detect whether it is a UI value change or a property value change. But I agree that an even better approach would be to route the control from the other app through its own "hidden" interface.