Everything posted by Rolf Kalbermatter

  1. A proper API would specifically document that one can call the function with a NULL pointer as buffer to receive the necessary buffer size, so that you can then call the function again with a properly sized buffer. But requiring you to specify the input buffer size in bytes while getting back the number of returned characters would be a really brain-damaged API. I would check again! What happens if you pass in the number of int16 elements (so half the number of bytes)? Does it truncate the output at that position? And you should still be able to define it as an int16 array. That way you don't need to decimate it afterwards.
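     To illustrate that pattern, here is a minimal C sketch of how such a documented size-query API is typically called. GetDeviceName and the exact meaning of pSize are assumptions made up for illustration only, not the poster's actual DLL:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <wchar.h>

        /* stand-in for the DLL function: accepts buffer == NULL and then only
           reports the required size (in characters) through pSize */
        static int GetDeviceName(wchar_t *buffer, int *pSize)
        {
            static const wchar_t name[] = L"Device-42";
            int needed = (int)(sizeof(name) / sizeof(name[0]));  /* incl. terminator */
            if (buffer == NULL) {
                *pSize = needed;        /* first call: just report the required size */
                return 0;
            }
            if (*pSize < needed)
                return -1;              /* caller's buffer is too small */
            memcpy(buffer, name, needed * sizeof(wchar_t));
            *pSize = needed;
            return 0;
        }

        int main(void)
        {
            int size = 0;
            if (GetDeviceName(NULL, &size) != 0)     /* query the required size */
                return 1;
            wchar_t *buf = malloc(size * sizeof(wchar_t));
            if (buf == NULL)
                return 1;
            if (GetDeviceName(buf, &size) == 0)      /* real call with a proper buffer */
                wprintf(L"%ls (%d characters)\n", buf, size);
            free(buf);
            return 0;
        }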
  2. LabVIEW takes the specification you set in the Call Library Node pretty literally. For C strings this means it will parse the string buffer on the right side of the node (if connected) for a 0 termination character and then convert that string into a LabVIEW string. For a Pascal string it interprets the first byte in the string as a length and then assumes that the rest of the buffer contains that many characters (although I would hope that it uses an upper bound of the buffer size as it was passed in on the left side).

     Since your "String" contains embedded 0 bytes, you cannot let LabVIEW treat it as a string but instead have to tell it to treat it as binary data. A binary string is simply an array of bytes (or in this specific case possibly an array of uInt16), and since it is a C pointer you have to pass the array as an Array Data Pointer. You have to make sure to allocate the array to a size big enough for the function to fill in its data (and probably pass that size in pSize so the function knows how big the buffer is it can use) and on return resize the array yourself to the size that is returned in pSize.

     You also have to make sure that you treat pSize correctly. This is likely the number of characters, so if this is a UTF-16 string it would be equal to the number of uInt16 elements in the array (if you use a byte array on the LabVIEW side instead, the size in LabVIEW bytes would likely be double what the function considers as size). But note the "likely" above! Your DLL programmer is free to require a minimum buffer size on entry and ignore pSize altogether, or treat pSize as a number of bytes, or even a number of apples if he likes. This information must be documented in the function documentation in prose text and cannot be specified in the header file in any way.

     Last but not least you will need to convert the UTF-16 characters to a LabVIEW MBCS string. If you have treated it as a uInt16 array, you can basically scan the array for values that are higher than 127; these would need to be treated specially. If your array only contains values up to and including 127 you can simply convert them to U8 bytes and then convert the resulting byte array to a LabVIEW string. Values above 127 are not directly translatable to ASCII; there are special translation tables that can get pretty involved, especially since they depend on your current ANSI codepage. The best would be to use the Windows API WideCharToMultiByte(), but that is also not a very trivial API to invoke through the Call Library Node (a rough sketch follows below). On the dark side you can find some more information here about possible solutions to do this properly.

     The crashing is pretty normal. If you deal with the Call Library Node and tell LabVIEW to pass in a certain datatype or buffer and the underlying DLL expects something else, there is really nothing LabVIEW can do to protect you from memory corruption.
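     Coming back to the UTF-16 conversion mentioned above, here is a rough C sketch of how WideCharToMultiByte() is typically invoked for this, using the usual two-pass Windows pattern. The input string is just made-up example data:

        #include <windows.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            /* pretend this came back from the DLL; length in UTF-16 code units */
            const WCHAR wide[] = L"Messwert 10 \u00B5m";
            int wideLen = (int)(sizeof(wide) / sizeof(wide[0])) - 1;

            /* first call with no output buffer: ask how many ANSI bytes are needed */
            int ansiLen = WideCharToMultiByte(CP_ACP, 0, wide, wideLen, NULL, 0, NULL, NULL);
            if (ansiLen <= 0)
                return 1;

            char *ansi = malloc(ansiLen + 1);
            if (ansi == NULL)
                return 1;

            /* second call does the actual conversion into the allocated buffer */
            WideCharToMultiByte(CP_ACP, 0, wide, wideLen, ansi, ansiLen, NULL, NULL);
            ansi[ansiLen] = '\0';
            printf("%s\n", ansi);
            free(ansi);
            return 0;
        }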
  3. It is quite unclear to me what you are trying to do here. Can you provide some VI code and explain in more detail what your problem is? If you have a 2D array of uInt16 values I would think this to be your compressed image data. This can be saved as a binary stream, but of course it would only be useful for your own application, which knows which decompression function to run it through to get the uncompressed data again. If you want to save it in a way that other applications can read, then you may be on the wrong track here. Generally, while specific compression algorithms can be interesting to use, there is a lot more involved in making sure that such an image file is readable in other applications too. For that there are specific compressible image formats such as JPEG, JPEG2K, PNG and TIFF, although TIFF is probably the worst of all in terms of interoperability as it allows in principle an unlimited number of image formats and compressions to be used, yet there is no application in the world which could read all possible TIFF variants.
  4. That very much depends on your C# executable. Is it a simple command line tool which you can run with a command line parameter such as "on" and "off" and which returns immediately, keeping your force feedback controller running through some other background task? If your executable needs to keep running in order for the force feedback controller to stay on, you can not really use System Exec here. Another idea would be to interface to whatever API your C# application uses directly from LabVIEW. I would assume you use some .Net interface in that C# application to control your force feedback controller, so there shouldn't be a big problem in accessing that API from LabVIEW directly and skipping your C# Form application altogether. It is indeed not a good idea to try to interface a .Net Form application to another application through either .Net calls or command line calls, as your Form application is an independent process that doesn't lend itself to the synchronous interface that .Net calls or command line calls would impose. If you consider the direct .Net interface too worrisome for some reason (it would be interesting to hear why you think so), another option might be to add an interprocess communication interface to your C# application, such as a TCP/IP server, through which external applications can send commands to control the controller.
  5. I'm afraid there is no real way to get the current LabPython to work with Python 3.0 without a recompile (and, as it seems, modification of the shared library code for the new Python 3.x C API). It may be as trivial as just the missing Py_ZeroStruct export, but it could also be a lot more complicated.
  6. While an interesting project, it looks pretty dead. No commits or any other activity on GitHub since March 2012, after only about one year of real activity! An impressive amount of code, but that is at the same time a problem for anyone but the original creator who wants to pick it up again. I don't necessarily consider Java to be bad even though it is not my preferred programming language. But things like plugging in LLVM for native code generation would be pretty hard that way.
  7. Duplicate post here. Refer to the answer there. BTW: the original topic is only related with respect to the PostLVUserEvent() function but was explicitly about a quite different and more practical data structure.
  8. Well, Excel is (mostly) Unicode throughout, so it can simply display any language that can be represented in UTF-16 (the Unicode encoding scheme used by Windows). LabVIEW is NOT Unicode (aside from the unsupported option to enable UTF-8 support, but that is an experimental feature with lots of difficulties that make seamless operation very difficult). As such LabVIEW uses whatever MBCS (MultiByte Character Set) your language setting in the International Control Panel defines. When LabVIEW invokes ActiveX methods in the Excel Automation Server, the strings are automatically translated from the Excel Unicode format to the LabVIEW MBCS format. But Unicode to MBCS can be lossy, since no MBCS coding scheme other than UTF-8 (which can also be considered MBCS) can represent every Unicode character. And unlike Linux, Windows doesn't allow UTF-8 to be set as the system MBCS encoding. So if your Excel string contains characters that can not be translated to characters in the current MBCS of the system, you get a problem. There is no simple solution to this, otherwise NI and many others would have done it long ago. Anything that can be thought out for this will in some way have drawbacks elsewhere.
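     To make the lossy part concrete: WideCharToMultiByte() can report through its last parameter whether any character had to be replaced by a default character because the target codepage cannot represent it. A minimal Windows-only sketch with made-up example text:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            const WCHAR wide[] = L"Temp \u4E2D\u6587";   /* contains Chinese characters */
            char ansi[64];
            BOOL lossy = FALSE;

            /* on a Western ANSI codepage the Chinese characters are replaced
               by the default character and lossy is set to TRUE */
            int len = WideCharToMultiByte(CP_ACP, 0, wide, -1,
                                          ansi, (int)sizeof(ansi), NULL, &lossy);
            if (len > 0)
                printf("converted: \"%s\", lossy: %s\n", ansi, lossy ? "yes" : "no");
            return 0;
        }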
  9. Well, Scan From String is not meant to be used like that, just like the scanf() functions in C. They consume as many characters as match the format specifier and then stop. They only fail if no character matches the format specifier or if literal strings in the format string don't match. While I would usually use Match Pattern for such things, Scan From String can be used too. If you want to know whether the whole string was consumed, you just have to check that the offset past scan is equal to the string length, as Darin suggested, or that the remaining string is empty.
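     The same behavior shown in C for illustration: the conversion consumes what matches and then stops, and %n reports how far it got, analogous to checking the offset past scan output:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *input = "123abc";
            int value = 0, consumed = 0;

            /* %d matches "123" and stops at 'a'; it does NOT fail */
            int matched = sscanf(input, "%d%n", &value, &consumed);

            printf("matched items: %d, value: %d, consumed: %d\n", matched, value, consumed);
            if (consumed == (int)strlen(input))
                printf("the whole string was a valid number\n");
            else
                printf("leftover characters: \"%s\"\n", input + consumed);
            return 0;
        }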
  10. Traditionally all references needed to be closed. This changed around LabVIEW 7.1 for control references. But closing them anyway doesn't hurt, as Close Reference is simply a no-op for them. A quick way to check whether you cause memory leaks by not closing a reference is to typecast it into an int32 and execute the code twice. If the numeric value doesn't change, it is a static refnum and doesn't need to be closed. But generally I never do that. The rule of thumb is simple: if it is a control reference it doesn't need to be closed, anything else needs closing. And closing a control reference that doesn't need to be closed doesn't hurt, so err on the safe side. Oh, and as far as the performance difference goes: you really can spend your valuable developer time much better than worrying about such sub-microsecond optimizations. The difference is likely negligible, and definitely too small for me to worry about.
  11. Seems to me you want to scan a hexadecimal integer. That would be %x.
  12. You don't get that guarantee on Windows anyway. Last time I did a check, the lowest timing resolution of loops under Windows was 10 ms, no matter whether I used Timed Loops or a normal Wait Until Next ms Multiple. With a wait of 0 ms I could make it go pretty fast, but the Timed Loop barked at the 0 ms value. A wait of less than 10 ms definitely showed the discrete nature of the interval, being usually near 10 ms, or almost 0 ms if the code inside allowed the near 0 ms delay. And no, there was no guarantee that it would never take more than 10 ms; there were always outliers above 10 ms if I tested for more than a few seconds.
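     For reference, a small C sketch that makes this granularity visible on Windows by measuring how long a Sleep(1) call really takes (the actual numbers depend on the current system timer resolution):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            LARGE_INTEGER freq, start, stop;
            QueryPerformanceFrequency(&freq);

            for (int i = 0; i < 10; i++) {
                QueryPerformanceCounter(&start);
                Sleep(1);                            /* ask for a 1 ms delay */
                QueryPerformanceCounter(&stop);
                double ms = (stop.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
                printf("requested 1 ms, got %.2f ms\n", ms);
            }
            return 0;
        }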
  13. You need to be a lot more specific. What is the problem? Why do you not want to use VISA? Did you write that DLL? Do you know C? For one, DLL interfacing requires a lot of intimate C knowledge to get right. If you don't have that knowledge, using VISA is definitely going to be much less painful and especially won't crash your application continuously during development (and quite possibly also later during execution of your application, since even a small error in interfacing a DLL will sooner or later cause problems in the form of bad behavior or crashes). If you still want to go down the DLL route, we need to know what doesn't work, what exactly you have tried, and of course the entire documentation and header files for the DLL too, in order to even have a small chance of helping you. A DLL is simply a bunch of compiled code. There is no way to determine from the DLL alone what function interfaces it contains nor what the correct parameter types of those functions are. And last but not least, even the header file of the DLL doesn't describe how buffers need to be allocated and freed when calling the functions, so there usually needs to be some extra prose documentation that describes this. Your first post is analogous to me posting a picture of my car and saying "it doesn't work, please help me!" Only clairvoyant people with an understanding of car technology could possibly have an idea what might be the problem. But my crystal ball is currently in repair and I need to do it with what other mere mortals have available.
  14. Definitely echo JKSH's post. As a rule your diagram should not use more than one screen. That makes life a lot easier not only for nudging out those wire bends. This rule was even valid back in those days when you had only 1024 x 768 pixel screens.
  15. You should post in the correct forum topic. This is for providing code that others can use, not for asking questions about (possible) errors.
  16. "IDN?" is a IEEE-488.2 specific command. Only devices implementing this protocol will recognize it and answer meaningfully. Most serial devices out there will NOT implement IEEE-488.2. Only typical measurement devices from manufacturers also providing GPIB options with their devices usually go to the extra length of implementing a real standard. Everybody else simply cooks up his own protocol. Why don't you try to send the commands to your device that you also tried with HyperTerminal? Just don't forget that HyperTerminal will be default convert your Enter key that you press to send of the command, into a carriage return, line feed, or carriage return/line feed character sequence and send it at the end of the data. You have to append the according \r, \n, or \r\n sequence to your command in MAX to achieve the same. And also do this same thing when you write data in LabVIEW using VISA Write.
  17. Basically, changing the value over ActiveX is equivalent to the VI Server method "Control Value.Set", which takes a label name and the value as a variant and then "pastes" it into the control. This does indeed not cause an event. The OP probably detects this by polling the controls for value changes and then generates a Value Change (signaling) event. If the user now operates the front panel, there will be an event from the user interaction and another one from the brat VI detecting a value change. Basically there should be a way in the event loop to detect whether it is a UI value change or a property value change. But I agree that an even better approach would be to route the control from the other app through its own "hidden" interface.
  18. Well, it seems I should be able to do something here, sans Mac support, if I leave all the encoding stuff out. There is a potential problem with extended ASCII characters in filenames inside the archive that has existed since the beginning and caused some problems in the past, which I was trying to tackle. But no matter what I try, it always turns out to cause problems somewhere. So I have started to remove all the encoding translation from the current version in order to get a version out that should be functionally at least the same as older versions of the OpenG ZIP library (but supports the new 64-bit platforms and the RT systems). It will still fail badly for filenames that contain extended characters, but I'm not really sure anymore that it's useful to try to fix that. I might later try to add some simple conversion function on the LabVIEW level to handle those filenames a little more gracefully when extracting an existing archive (there is no way to guarantee letter-for-letter matching between the original archive name and the resulting file name after extraction, since LabVIEW doesn't support Unicode filenames yet), but it will at least extract the files to a similar name. As I'm going on ski vacation at the end of this week, I don't think I will be able to create a fully featured OpenG package, but I will try to post a preliminary and only lightly tested package here before that.
  19. I was allowed a few minutes before the exam to sit at the machine and set up the preferences to my liking. Very important, as I still use many settings as they were the defaults around LabVIEW 5.1, and I hate certain things like auto wire routing and auto tool selection.
  20. LabVIEW developers call your writers "stompers". They stomp on the memory they receive as input. The LabVIEW compiler is smart enough to not create a copy if only one data sink is a stomper. It schedules the code in such a way that all the non-stompers read the value first and then lets the stomper use the data. It even goes so far that an input can change its stomper status based on whether an output of that node, which may reuse that input value, is wired or left unwired.
  21. As I said, they may work for your situation. But plugins tend to become a problem sooner or later. You usually can't just include your own VIs, as the vi.lib directory is not present in an application. So you need the entire hierarchy, but then you can't use any standard LabVIEW function that exists in, or uses a VI inside, an lvlib or lvclass. With many of the LabVIEW VI libraries having been turned into lvlibs recently, this is a serious problem.
  22. One more piece of advice: don't use LLBs in your code development. It may be ok to do so for the final executable, where you want to have certain libraries as a container in an external file as some sort of plugin, but more likely you will want to use packed libraries for that anyway. The LLB fails badly for such solutions as soon as any of your dependent VIs are part of an lvlib or lvclass, since LLBs can not contain these files, so a VI inside an LLB is then broken because it is missing its lvlib or lvclass file. I learned this the hard way when upgrading an older application that used LLBs as plugins. Suddenly most of the plugins were broken. It turned out that a plugin failed as soon as it called an AAL function, as these got lvlibed in newer versions.
  23. I know it was Michael's post and it wasn't really directed at you. I fully agree that Release when Ready is the more accurate term here. And Ready can mean a lot of things, as you allude to. For a consumer application it can mean the application doesn't crash if started and used normally. For a mission critical system it really means: we have tested everything we could think of and then some more and could not find anything wrong with it, and every engineer working on the changes has evaluated every effect that change could have, as far as humanly possible. And ideally there was a real field test on some small scale to actually test the real world interactions too. A release manager can help manage that, but in many cases is just another person in a shorter or longer list of people who exist to dilute the responsibility if something goes wrong.
  24. A release manager wouldn't necessarily help. But the whole process of upgrading software certainly needs to be adjusted to the possible impact when something fails. Release small and often isn't the fix for this either. Even the smallest bug can cause such a mishap. And releasing small and often does increase that chance rather than decrease it. It also makes rigorous testing even more unlikely.