Rolf Kalbermatter

Members
  • Posts

    3,837
  • Joined

  • Last visited

  • Days Won

    259

Everything posted by Rolf Kalbermatter

  1. While it's a breaking change to modify this now, the original choice of writing seconds since January 1, 1904 as a timestamp to SQLite is truly broken too. So I would investigate whether you can demote the current method as deprecated, remove it from the palettes, and add a new one that does the right thing. Existing applications using that function will still work as they used to, and still use a broken timestamp, while new developments would use the right method. Also document that difference somewhere, so that anyone wanting to read databases written with the old method knows to use the deprecated method for reading, with a stern reminder not to use it for new development.
  2. LabVIEW's timestamp format is explicitly defined as the number of seconds since midnight January 1, 1904 GMT. There is no reason LabVIEW needs to adhere to any specific epoch. On Windows the system time epoch counts 100 ns intervals since January 1, 1601, Unix indeed uses January 1, 1970, while MacOS used January 1, 1904 (yes, LabVIEW was originally a MacOS-only application!). And as a curiosity, Excel's default time definition is the number of days since January 1, 1900, although due to a mishap when defining that epoch and forgetting that 1900 wasn't a leap year, the real epoch starts on midnight December 31, 1899. But there is a configuration option in Excel to shift this to the MacOS time epoch! It's definitely a good thing that they stick to the same time epoch on all LabVIEW platforms, and since the MacOS one was the first to be used by LabVIEW, it's a good choice to stick with that. If your API has a different epoch you have to provide functions to convert between the LabVIEW epoch and your API epoch and vice versa.
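The conversion between the two epochs is a fixed offset: 66 years lie between 1904 and 1970, 17 of them leap years. A minimal sketch in C (the function names are my own, not part of any LabVIEW or NI API):

```c
#include <stdint.h>

/* Seconds between the LabVIEW/MacOS epoch (1904-01-01 00:00 GMT) and
   the Unix epoch (1970-01-01 00:00 GMT): 66 years, of which 17 are
   leap years -> (66 * 365 + 17) * 86400 = 2082844800 seconds. */
#define LV_UNIX_EPOCH_DIFF 2082844800LL

/* Convert a LabVIEW timestamp (whole seconds) to Unix time. */
int64_t lv_to_unix(int64_t lvSeconds)   { return lvSeconds - LV_UNIX_EPOCH_DIFF; }

/* Convert Unix time back to a LabVIEW timestamp (whole seconds). */
int64_t unix_to_lv(int64_t unixSeconds) { return unixSeconds + LV_UNIX_EPOCH_DIFF; }
```

Subtract the offset when going from LabVIEW to Unix time and add it when going the other way; the fractional-second part of LabVIEW's 128-bit timestamp is ignored here.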
  3. It's generally correct what JKSH writes when the handle containing the handles is allocated by yourself. However, if that array of CodecInfo structs comes from LabVIEW there are strict rules that must be followed when returning such data to LabVIEW. Anything beyond the number of elements indicated in the array is uninitialized; the actual array handle space may be bigger, but never smaller, than needed for the number of valid elements. The only exception to this are empty array handles, which can be either a valid handle with a number of elements equal to 0 OR a null handle itself. So when receiving handles from LabVIEW you should always check for null and, depending on that, do DSNewHandle/DSNewHClr or DSSetHandleSize. When resizing a handle, existing elements need to be resized and appended elements always created. Removed elements when making an array smaller must always be recursively deallocated, as they don't exist for LabVIEW anymore once the array length has been readjusted to a smaller size, and they would therefore create memory leaks. If you allocate both the array handle and the contained handles inside the array in the same code section, without returning in between to the LabVIEW diagram, it is entirely up to you whether you use DSNewHandle or DSNewHClr, as the values inside the newly allocated array need to be explicitly initialized anyhow. The latter requires more execution time but may be faster if you need to initialize most elements in there to 0 or an empty handle anyway. Also it may be a little safer when someone later makes modifications to the code, as null pointer dereferencing has a higher chance of crashing than accessing uninitialized pointers, so debugging is easier.
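The resize rules can be sketched in plain C. This is an illustration only: it uses malloc/realloc/free where real code would use the LabVIEW memory manager calls (DSSetHandleSize, DSDisposeHandle, and so on), and the element type is a hypothetical stand-in for handles inside a handle:

```c
#include <stdlib.h>

/* Plain-C stand-in for a LabVIEW array-of-handles: a length plus a
   block of element "handles" (here just malloc'ed pointers). */
typedef struct { size_t len; char **elems; } handle_array_t;

/* Resize following the rules described above: elements removed by a
   shrink are deallocated first (LabVIEW would no longer know about
   them once the length shrinks), and slots added by a grow are
   initialized, here to NULL (the "null handle" case). */
int resize_handle_array(handle_array_t *arr, size_t new_len) {
    for (size_t i = new_len; i < arr->len; i++)   /* shrink: free removed */
        free(arr->elems[i]);
    char **p = (char **)realloc(arr->elems, new_len * sizeof *p);
    if (new_len != 0 && p == NULL) return -1;     /* out of memory */
    arr->elems = p;
    for (size_t i = arr->len; i < new_len; i++)   /* grow: init new slots */
        p[i] = NULL;
    arr->len = new_len;
    return 0;
}
```

With nested handles the free in the shrink loop would itself have to recurse into each element, which is the "recursively deallocated" part of the rule.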
  4. I thought it would be this, but that has many shortcomings. The only way to destroy an occurrence is by calling the DestroyOccur() C API in LabVIEW. However, if you do that with an occurrence that was created with the Create Occurrence node, that occurrence is gone, and the only way to get it back is by unloading and reloading the VI that contains the Create Occurrence node. Without reloading the VI, this occurrence will be invalid and will immediately run into the timeout case again if you restart the VI. Not exactly a convenient thing when you are developing and starting and stopping your app repeatedly. Of course, if you create the occurrence by calling the AllocOccur() API this is not an issue, but then you are already calling two undocumented C APIs.
  5. You forgot a smiley there after the last sentence!
  6. The Create Occurrence node only executes at VI load time, and that is by design. It has been that way since the inception of occurrences in LabVIEW, back around LabVIEW 2.0. There is the undocumented LabVIEW manager function AllocOccur() that returns a unique occurrence refnum every time it is executed. However, since around LabVIEW 6 you have notifiers, queues, and semaphores, which internally do use occurrences for the signaling aspect but have the extra advantage of allowing some data to be transported along with the signal event, which occurrences don't have. Occurrences did have a nasty feature in the past, and may still have it, that could bite you if you are not careful: their state would remain triggered once they were set if the Wait on Occurrence wasn't executed before the program was terminated, and this would immediately trigger the Wait on Occurrence on a new start of the application, even if in this new run the occurrence hadn't yet been set. As to destroying the occurrence to let the Wait on Occurrence terminate, that sounds like a pretty brute-force approach. Why resort to that instead of just setting the occurrence anyway?
  7. You launch the VI inside your RT app and want its front panel to show on your host computer? That is not possible, to the best of my knowledge. Your host application can have a front panel that it shows and remotely call your VI through VI Server to execute some code on your RT target, though.
  8. I'm not sure where you see hostility in those remarks. Yes, it may not be sugar-coated sweet talk, but hostile? I think you should reconsider this.
  9. I take issue with that n**i word. If someone comes to you and tells you your application doesn't run, you would also try to educate him that a lot more information is needed before you can do anything for him, wouldn't you? In this case it is not even software I wrote, so how would we have been able to even guess from the first two posts of the OP that GPIB might be involved? Nor did he provide any information about his application other than that it caused a specific error number. Posting in OpenG actually made me guess at first that it might be related to some OpenG toolkit function, otherwise I would have left the post entirely alone in the first place.
  10. Well, it is already pretty helpful to know that the function uses GPIB, but it would be even more helpful to see what is actually inside that VI. Most likely it uses the old GPIB function interface, where error 6 actually means that the GPIB I/O operation was aborted. That is a somewhat generic error for GPIB operations: the GPIB controller detected some kind of error and aborted the transfer because of it. Check out this document for a list of possible GPIB errors and what they could mean. As to posting a photograph of a screenshot: posting the actual VI AND subVI would have taken less work for you and would be about 100 times more useful, making it possible to see how the VI that causes the error is built up. We still can only guess that it is probably the GPIB function causing that error.
  11. We don't know your ion gauge, nor what it does and how it functions. It could be that the instrument driver for it simply returns its own errors as LabVIEW errors, so that the actual numeric value means something completely different than what one would expect from LabVIEW functions. It could also be trying to read some file somewhere. All guesswork if you don't provide a LOT MORE information about your program, hardware and source code. You usually don't go to a car shop and tell the mechanic that your car doesn't work and ask if he could tell you what the problem is, without bringing the car along so he can have an actual look at it, do you? You know more about your software than anyone here does, so you need to tell us as much as possible about it in order to allow us to help you! As to where to post: I notified a moderator and they moved the thread into the General area, where you should have posted it in the first place.
  12. I didn't mean external hardware but what is built into the system: a memory module, a built-in HD, the system board. Together with a less-than-perfect thermal design, that can mean the hardware gets hot enough that some less-than-perfect component gets into soft errors.
  13. Definitely not an Instrument Driver thing. The only thing besides faulty hardware that can still create BSODs in Windows is a faulty device driver. Drivers run in the kernel space of Windows, and there is really nothing Windows can do to prevent a kernel driver from corrupting its memory. That is why 64-bit Windows (and the latest versions of MacOS X) by default only loads signed drivers. In order to get a driver signed, the manufacturer has to submit it to a test review process, which tries to make sure the driver conforms to certain test scenarios so that it will work flawlessly in all but the very most extreme exceptions. Today a BSOD is a pretty reliable sign that you have some hardware problem in your system, such as a bad hard disk, PCI bridge, or memory that can cause temporary dropouts.
  14. Error 6 is a permission error. That can have many reasons, and without knowing which function gives you this error there is really not much we can do to help you. Most likely it is a file permission error. So maybe you logged in with a different user than before, or your system administrator changed the access rights for your user account. By the way: you posted in the wrong forum too. This seems to have nothing to do with OpenG functionality at all, so it should have been posted in one of the more general forums.
  15. One possibility is to use the Call Chain function and reference the last element in the resulting array. That is the top-level VI of the current call chain, so it might not be the top level of the entire application (when you use VI Server->Run VI or Call Asynchronous), but it should be close enough. Otherwise there is an "App->Parent Window for Dialogs" property, but last time I checked it always returned a 0 handle, which for most Windows APIs requiring a parent handle would mean the window is system modal, as the 0 HWND is treated as a shortcut for the Desktop window, which is the parent of every other window on the desktop.
  16. That's pretty harsh! The source code is on SourceForge, and nobody is preventing other people from accessing it and attempting to port it to 64-bit LabVIEW. It won't be easy, but given enough determination it is certainly doable.
  17. It might be possible, but it is far from just a recompilation of the code. The code was written at a time when nobody was thinking about 64-bit CPUs, and no standards existed for how one should prepare for that eventuality. Also, it is possible that the script interface in LabVIEW has been cleansed of support for older API standards in the 64-bit version. LabPython uses the first version of that API, but if the 64-bit version of LabVIEW doesn't support that anymore, documentation for the newer version would need to be obtained from NI. This is not an official public API.
  18. As Shaun already elaborated, the issue is always about the LabVIEW bitness, not the underlying OS (well, you can't run 64-bit LabVIEW on 32-bit Windows, but that is beside the point here).
  19. The problem is certainly not endianness. LabVIEW internally uses whatever endianness is the preferred one for the platform. Endianness only comes into the picture when you flatten/unflatten data (and typecast is a special case of flatten/unflatten) into differently sized values. This is why you need to byte swap here. Your issue with seemingly random data is most likely alignment. LabVIEW on x86 always packs data as tightly as possible. Visual C uses a default alignment of 8 bytes. You can change that with pack() pragmas, though.
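The packing difference can be made visible with two C declarations of the same fields; the cluster {int32, double} is just a made-up example:

```c
#include <stdint.h>
#include <stddef.h>

/* The same two fields, once packed the way LabVIEW lays out clusters
   on x86 (1-byte packing)... */
#pragma pack(push, 1)
typedef struct { int32_t count; double value; } lv_packed_t;
#pragma pack(pop)

/* ...and once with the compiler's natural alignment, which is what a
   struct in a Visual C DLL gets by default. */
typedef struct { int32_t count; double value; } c_natural_t;
```

With natural alignment the compiler inserts padding after the int32 so the double lands on an aligned boundary; read that memory with the packed layout and every field after the first one appears shifted, which is exactly the "seemingly random data" symptom.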
  20. Well, I'm not sure what your hourly rate is. But a LabVIEW upgrade is almost certainly cheaper than trying to recreate that effort for yourself, if you intend to use the result in anything that is even remotely commercial.
  21. You wouldn't need all that swapping if you had placed the naturally sized element in that cluster: an int64 for 64-bit LabVIEW and an int32 for 32-bit LabVIEW. For the 0 elements in your cluster it doesn't matter anyhow. Now you have a dialog with a title and a single string in it, and it already looks complicated. The next step is to make it actually useful by filling in the right data in all the other structure elements!
  22. Of course that won't work! That sqlite3 **ppDb parameter is a pointer to a pointer. The inner pointer is really a pointer to a structure that sqlite uses to maintain information and state for a database, but as caller of that API you should not be concerned about its contents. It's enough to treat that pointer as a pointer-sized integer that is passed to other sqlite APIs. However, in order for the function to be able to return that pointer, it has to be passed by reference, hence the pointer to pointer. Change that parameter to a pointer-sized integer, passed by reference (as pointer), and things look a lot different. However, seeing you struggle with such basic details, it might be a good idea to check out this here on this site. Someone else already did all the hard work of figuring out how to call the sqlite API in LabVIEW.
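The pointer-to-pointer pattern looks like this in C. The struct and function names here are hypothetical stand-ins for sqlite3 and sqlite3_open, just to show the shape of the call:

```c
#include <stdlib.h>

/* Hypothetical opaque state struct standing in for sqlite3; the
   caller never looks inside, it only holds the reference. */
typedef struct { int state; } db_t;

/* Same shape as sqlite3_open(): the function allocates the state
   struct and returns its address through the pointer-to-pointer. */
int db_open(db_t **ppDb) {
    *ppDb = (db_t *)malloc(sizeof(db_t));
    if (*ppDb == NULL) return 1;   /* failure */
    (*ppDb)->state = 1;
    return 0;                      /* success, like SQLITE_OK */
}

void db_close(db_t *pDb) { free(pDb); }
```

From LabVIEW you would configure that parameter in the Call Library Function Node as a pointer-sized integer, passed by reference; the value that comes back is the opaque reference you then wire to the other sqlite calls.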
  23. I would tag it übernasty. Really, it's an API that I find nasty to use even from C directly. Very likely there is. With all its own complexities, such as the correct .Net version that needs to be installed and instantiated by LabVIEW on startup. But the interfacing is made pretty easy, since .Net provides a standard to describe the API sufficiently for LabVIEW to do the nasty interface conversion automatically.
  24. The problem is that this structure is pretty unwieldy, uses Unicode strings throughout, and has many bitness-sensitive data elements. Basically, each LP<something> and H<something> in there is a pointer-sized element, meaning a 32-bit integer in 32-bit LabVIEW and a 64-bit integer in 64-bit LabVIEW. The same goes for the P<something> and <datatype> *<elementname> members. And yes, they have to be treated as integers on the LabVIEW diagram level, since LabVIEW doesn't have other datatypes on its diagram that directly correspond to these C pointers. Not to speak of the unions! So you end up with at least two totally different LabVIEW clusters to use for the two different platforms (and no, don't tell me you are sure to only use this on one specific platform; you won't!). Trying to do this on the LabVIEW diagram level basically means that you not only have to figure out how to use all those different structure elements properly (a pretty demanding task already) but also play C compiler, by translating between LabVIEW datatypes and their C counterparts properly. You just carved out for yourself a pretty in-depth crash course in low-level C compiler details. IMHO, that doesn't weigh up against a little unhappiness that LabVIEW dialogs don't look exactly like some hyped de facto Microsoft user interface standard that keeps changing with every new Windows version, and that I personally find changes for the worse with almost every version.
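A tiny C sketch of what those LP.../H... members amount to. The field names are made up, loosely modeled on the common dialog structs; the point is only that the pointer members change size with the bitness of the build:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical fragment of a Windows API struct as a C declaration.
   uintptr_t is 4 bytes in a 32-bit build and 8 bytes in a 64-bit
   build, which is why the matching LabVIEW cluster must differ per
   platform. */
typedef struct {
    uint32_t  structSize;    /* a plain DWORD: 4 bytes on both bitnesses  */
    uintptr_t ownerWindow;   /* an H<something>: pointer-sized            */
    uintptr_t filterString;  /* an LP<something>: pointer-sized           */
} dialog_params_t;
```

On 64-bit the compiler additionally pads after the DWORD so the first pointer member lands on an 8-byte boundary, so the two LabVIEW clusters differ not just in integer width but also in the padding you have to model explicitly.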