
Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. Maybe I was a tad bit too modest here. Thinking about it, you are of course right. FGVs are powerful and are easier to learn for someone not knowing much about OOP. The problem is that without some OOP knowledge such a person is likely to either get stuck at the "set/get FGV with a little extra functionality" level, or to start creating FGV monsters, at least in the beginning. So while the initial learning curve for FGVs is fairly easy, doing the really powerful designs is just as steep a learning curve as learning LVOOP, with the difference that LVOOP comes with some tools right in the LabVIEW IDE to ease the more mechanical tasks, while FGVs generally have to be created manually each time. Also, the separation of methods and data is a definite advantage and, thanks to the project integration, also easy to manage.
  2. While I certainly also am among the people who should attend the aforementioned LAA group, I do not try to hide these bends. Alignment where it is possible, yes; otherwise leave it. I prefer to see that the wire indeed goes to the terminal it appears to go to and not some other one, even if that alignment may only be off by one pixel. Nothing is as frustrating for me as connector panes (pains) that are chaotic, or wires going into an icon somewhere other than where they really connect.
  3. You should mention that you have posted elsewhere too (NI forum) for this, as that can help people to see if they can add anything useful to the thread, instead of repeating what others already said. Also it is a good way of getting additional references for anyone coming across similar problems in the future and coming here instead of the NI forums.
  4. That is a somewhat strong simplification! Technically you are right; conceptually an AE is a completely upside-down way of doing OOP. OOP is about encapsulating the data that the methods can work with, while an AE is about encapsulating the data AND the methods in one place. The data always lives together with the methods, which makes things like instantiation a bit problematic. There is also the aforementioned problem of the connector pane, which is not infinitely expandable. While this is a limit, I haven't found it a limit in the sense that I could not do the things I wanted to do. And the side effect is that it makes you think more before extending such an "object". And that is usually a good thing (except sometimes for project deadlines). As to the code bloat: as soon as you start to write accessor wrappers for the individual AE methods, you go down the same road. AEs only work through discipline from the implementor and the user (unless you wrap them, at which point the AE implementation becomes a detail that should not interest the user at all anymore). LVOOP works through contracts that the development environment and compiler impose on both the implementor and the user of the class. You can make a similar point (albeit only in the aspect of implementing one with the other) between C and C++. You can write object-oriented code in C just as well, but you have no support from the compiler environment for that. Everything beyond the normal C rules has to be done by discipline of the programmer, rather than by the compiler checking that object classes are indeed compatible and can be cast from one class to the other, just to name an example. Also, inheritance is a nice feature in OOP, as it allows you to easily implement variations on a theme. At the same time, it is also one of the more abused features in many OOP designs.
As soon as you find yourself trying to prop a potato class into a car interface, you should realize that you have probably just created a mutant monster that will eventually chase you in your worst nightmares. Inheritance in an AE context, on the other hand, is simply not feasible. But I would certainly agree that anybody claiming AEs to be generally inferior to classes is simply ignorant. They can be created and used very successfully if you have your mind properly wrapped around them. I would however hesitate to claim that they are worth learning at this point instead of LVOOP. As an additional tool in a programmer's toolkit they are nevertheless a very valuable and powerful addition to any LabVIEW programmer's expertise.
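The point above about writing object-oriented code in C purely by programmer discipline can be illustrated with a toy sketch (all names here are made up for illustration; this is not LabVIEW-related code). The compiler never checks that the casts or the function-pointer wiring are correct — that is exactly the discipline the post describes:

```c
#include <stdlib.h>

/* A "class" is a struct; its "methods" are function pointers that the
   implementor must wire up by hand -- the compiler enforces nothing. */
typedef struct Shape {
    double (*area)(const struct Shape *self);
    void   (*destroy)(struct Shape *self);
} Shape;

typedef struct {
    Shape base;        /* "inheritance" by embedding the parent first */
    double w, h;
} Rect;

static double rect_area(const Shape *self)
{
    /* Downcast is legal only because Shape is the first member --
       checked by discipline, not by the compiler. */
    const Rect *r = (const Rect *)self;
    return r->w * r->h;
}

static void rect_destroy(Shape *self)
{
    free(self);
}

Shape *rect_new(double w, double h)
{
    Rect *r = malloc(sizeof *r);
    r->base.area = rect_area;
    r->base.destroy = rect_destroy;
    r->w = w;
    r->h = h;
    return &r->base;   /* callers only ever see the "base class" pointer */
}
```

In C++ the vtable, the cast checks, and the constructor wiring would all be generated and verified by the compiler; here every piece is a manual convention.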
  5. Basically this whole discussion of perceived differences between LV2 Global, FGV, Action Engine, or IGV (Intelligent Global Variable) is a bit academic. Traditionally the LV2-style global was the first incarnation of this pattern, and indeed in the beginning mostly just with get/set accessor methods. However, smart minds soon found the possibility to also encapsulate additional methods into the LV2-style global, without even bothering to find a new name for this. In the over 25 years of LabVIEW use, new terms have arisen, often more to just have a new term rather than to describe a fundamentally different design pattern. As such these names are in practice quite interchangeable, as different people will tend to use different terms for exactly the same thing. Especially the distinction between FGV/IGV and AE feels a bit artificial to me. The claimed advantage of AEs of having no race conditions rests simply on the discipline of the programmer, of the implementer as well as the user. There is nowhere an official document stating "AEs shall not have any possibility to create race conditions", and such a rule would be impractical, as it would for instance mean disallowing any set- and get-like methods altogether; otherwise race conditions can still be produced by a lazy user who prefers to implement his data-modifying algorithm around the AE rather than move it into a new method inside it. I would agree that LV2-style global is a bit of an old name and usually means the set/get variant, but it does not and has not excluded the possibility of adding additional methods to make it smarter. For the rest, FGV, IGV, AE and whatever else has come up are often used interchangeably by different people, and I do not see a good cause in trying to force an artificial difference between them.
Daklu wrote:

Well, it is true there is a limit to the connector pane, and one rule of thumb I use is that if the FGV/AE requires more than the 12-terminal connector pane (and that includes the obligatory error clusters and method selector), it has become too unwieldy and the design needs to be reviewed. I realize that many will say "oh, all that additional work to refactor such an FGV/AE when this happens", and yes, it is work, sometimes quite a bit in fact, but it will also inevitably result in refactoring parts of the project that have themselves become unwieldy. With OOP you can keep adding more and more methods and data to an object until even the creator can't really comprehend it logically anymore, and it still "works". The FGV has a natural limit, which I don't tend to hit anymore nowadays, and that while my overall applications haven't gotten simpler.

Michael Avaliotis wrote:

You bet I do! I haven't dug into LVOOP yet, despite knowing some C++ and quite a bit of Java/C#.

Daklu wrote:

I think it has a lot to do with how your brain is wired. AEs and LVOOP are trying to do similar things in completely contrary ways. I would agree that AEs are not a good solution if you know LVOOP well, but I started with FGVs/AEs loooooooong before LVOOP was even a topic that anyone would have thought about. And in that process I went down several paths that I found to be dead ends, refining the process of creating AEs, including defining self-imposed rules to keep it all manageable for my limited brain capacity. They work amazingly well for me and have often allowed me to redefine functionality of existing applications by simply extending some AEs. This allowed me to keep the modifications localized to a single component and its support functions, rather than having to sprinkle changes throughout the application. The relatively small adaptations in the interface were easily taken care of, since the LabVIEW strict datatype paradigm normally pointed out the problematic spots right away.
And yes, I'm a proponent of making sure that LabVIEW VIs that make use of a modified component will break in some way, so one is forced to review those places at least once to see if there is a potential problem with the new addition. A proper OOP design would of course not need that, since the object interface is well designed from the start and will never introduce incompatibilities with existing code when it gets extended. But while that is the theory, I found that in OOP I sometimes tend to extend things, only to find out that certain code that makes use of the object suddenly breaks in very subtle and sometimes hard-to-find ways, whereas if I had been forced to review all callers at the time I added the extension, I would have been much more likely to identify the potential problem. Programming AEs is a fundamentally different (and I certainly won't claim superior) paradigm to LVOOP. I'm aware that it is much less formalized and requires quite some self-discipline to use properly, but many of my applications over the years would not have been possible to implement in a performant way without them. And as mentioned, a lot of them date from before the time when LVOOP would even have been an option. Should I change to LVOOP? Maybe, but that would require quite a learning curve and, maybe more importantly, relearning quite a few things that work very well with AEs but would be quite a problem with LVOOP. I tend to see it like this: just as with graphical vs. textual programming, some brains have a tendency towards one or the other, partly because of previous experience, partly because of training. I trained my brain over about 20 years of programming AEs. Before I could program the same functionality in LVOOP as I do nowadays with an AE would take me quite a bit more than a few weeks. And I would still have to do a lot of LVOOP before I would have found out what to do and what to avoid.
Maybe one of the problems is that the first time I looked at LVOOP it turned out to be a very frustrating experience. For some reason I can fairly easily accept that LabVIEW crashes on me because of errors I made in an external C component, but I get very upset if it crashes on me because I did some seemingly normal edit operation in the project window or such.
  6. I compiled a version but it crashed, so I left it at that for the time being. No use in releasing something that does not work. I should get some better test setups soon, so that debugging will work more easily. And the Pipes library was never officially released, so we never ported it over from the CVS repository to the SVN one. It's still in CVS only.
  7. As with all OpenG sources, they are on the OpenG Toolkit project page on sourceforge. All of them!
  8. "Never" would seem a very strong statement to me. See the OpenG LabPython, LVZIP and Pipe libraries, just to name a few. It seems the person who did the vxcan API wrapper did indeed "forget" to add the C code to the download, especially since that wrapper doesn't really consist of any magic at all, but simply some C-to-LabVIEW parameter mapping. I fully understand that providing multiple platform wrappers can be a real pain in the ass, which would make it a good idea to add the C source of those wrappers so others can recompile for new platforms, but doing everything on the LabVIEW level is not a maintainable solution in the long run at all. APIs usually differ enough between platforms anyhow that a pure LabVIEW wrapper becomes a real pain to write such that it works on multiple platforms, unless the API developer made a point of keeping the API binary-consistent between platforms.
  9. Unless you want to hack into the Import Library Wizard VI code yourself (and create a maintenance nightmare, since there is no publicly documented VI API for it so far, AFAIK), I don't believe there is currently an option. And the command line approach does not seem to me the ideal way of creating such an interface, since the Import Library Wizard would potentially require an entire page of command line parameters if you consider things like header directories, defines, etc.
  10. That is not cheating but the proper course of action unless you enjoy playing C compiler yourself and create a badly maintainable VI.
  11. Shaun, in theory you are right. In practice a LabVIEW DLL contains a C wrapper for each function, which invokes the corresponding pre-compiled VI inside the DLL. As such there needs to be some runtime support to load and execute these VIs. This usually happens inside the corresponding LabVIEW runtime, which is launched from the wrapper. Some kind of Münchhausen trick, really. However, at least in earlier versions of LabVIEW, if the platform and LabVIEW version of the compiled DLL were the same as those of the calling process, the wrapper invoked the VIs inside the DLL directly in the context of the calling LabVIEW process.
  12. Seems it is again the time to clean out the blog spam.
  13. There is no easy answer to this. As with most things the right answer is: it depends! If your LabVIEW DLL was created with a different version than the LabVIEW version you are running your lvlib in, you are safe. The DLL will be executed in the context of the runtime version of LabVIEW that corresponds to the LabVIEW version used to create the DLL. Your LabVIEW lib executes directly in the calling LabVIEW system, so they are as isolated from each other as you can get on the same machine instance. However, if you load the DLL into the same LabVIEW development system version as was used to create it, things get more interesting. In that case LabVIEW loads the VIs inside the DLL into the same LabVIEW system to gain some performance. Loading the DLL into a different LabVIEW runtime requires marshaling of all function parameters across process boundaries, since the runtime system is a different process than your LabVIEW system, which is quite costly. Short-circuiting this saves a lot of overhead. But if the VIs in the DLL are not in the same version as the current LabVIEW version, this cannot be done, as the DLL VIs are normally stored without diagrams and can therefore not be recompiled for the current LabVIEW platform. So in this case things get a bit more complicated. I haven't tested so far whether VIs inside DLLs get loaded into a special application context in that case. It would be the best way to guarantee behavior as similar as possible to the DLL being loaded into a separate runtime. But it may also involve special difficulties that I'm not aware of.
  14. This does not sound like any LabPython specific issue but a simple basic Python problem. Please refer to a Python discussion forum for such questions. They can be of a lot more assistance to you than I could. When creating LabPython about 15 years ago I knew Python much more from the embedding API than anything else and was just proficient enough in Python itself to write some simple scripts to test LabPython. Haven't used Python in any form and flavor since.
  15. Are you sure NI-IMAQ contains the barcode functions? I thought NI-IMAQ only contains the functions that are directly necessary for getting image data into the computer. The actual processing of images is then done with the NI Vision Development Module. And to heng1991: this software may seem expensive, but once you have exhausted your patience trying to get external libraries not to crash while creating a LabVIEW interface for them, you will very likely agree that this price has a reason. Especially since, unless you are very experienced with interfacing to external libraries, you are very likely to create a VI interface that may seem to work but will in fact corrupt data silently in real-world applications.
  16. DSCheckPtr() is generally a bad idea for several reasons. For one, it gives you a false sense of security, since there are situations where this check would simply have to conclude that the pointer is valid while it could still be invalid in the context in which you make the check. Such a function can check a few basic attributes of a pointer, such as whether it is not NULL and is a real pointer already allocated in the heap rather than just an address to some arbitrary memory location, but it cannot check whether this pointer was allocated by the original context in which you make the check, or has since been freed and reallocated by someone else. And anything but the trivial NULL pointer check costs significant performance, as the function has to walk the allocated heap pointers to find out whether it exists in there at all. Windows also has such a function, which only works if the memory was allocated through the HeapAlloc() function, but its performance is notoriously bad and its sense of security just as false. Use of this function is a clear indication that someone tried to patch up a badly designed library by adding some extra pseudo-security. As to atomic operations in the exported C API of LabVIEW, I'm not really aware of any, but I haven't checked in 2012 or 2013 whether there are new exports available that might sound like an atomic cmpxchg(). Even if there were, I find releasing a library that does not support at least 3 versions of LabVIEW not really a good idea. On the other hand, with some preprocessor magic it would not be too difficult to create a source code file that resorts to compiler intrinsics where available (MSVC >= 2005 and GCC >= 4.1.4) and implements the according inline assembly instructions for the others (VxWorks 6.1 and 6.3, and MSVC 6 for Pharlap ETS). I could even provide my partly tested version of a header file for this. And if you want to be safe you should avoid using a U8 as lock.
SwapBlock(), not being atomic as far as I know, has no way to guarantee that another concurrent call to it on an address adjacent to the currently swapped byte would not destroy the just-swapped byte, since the CPU generally works on 32-bit accesses. Also avoid the temptation to make any data structure you want to access in such a way packed in memory. Only aligned accesses to memory will generally be safe from being stomped on by another thread accessing a memory address directly adjacent to this address. If you can use 32-bit locks and ensure the 32-bit element is properly aligned in memory, SwapBlock() won't need to be atomic, as long as you can guarantee that no concurrent read/modify/write (SwapBlock()) access to the same address will ever happen.
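The aligned 32-bit lock idea can be sketched with portable C11 atomics. This is a hypothetical stand-in for the compiler intrinsics and inline assembly mentioned above, not anything exported by LabVIEW; the names are made up:

```c
#include <stdatomic.h>
#include <stdint.h>

/* An aligned 32-bit lock word. A compare-and-swap on a naturally aligned
   32-bit quantity is atomic on all mainstream CPUs, so no neighbouring
   bytes are ever at risk of being stomped on. */
typedef struct {
    _Alignas(4) atomic_uint_least32_t lock;   /* 0 = free, 1 = taken */
} SpinLock;

int spin_try_acquire(SpinLock *s)
{
    uint_least32_t expected = 0;
    /* Succeeds only if the lock was free; on failure the lock word is
       left untouched and 'expected' receives the current value. */
    return atomic_compare_exchange_strong(&s->lock, &expected, 1);
}

void spin_release(SpinLock *s)
{
    atomic_store(&s->lock, 0);
}
```

With a U8 lock instead, the hardware may implement the store as a read-modify-write of the surrounding word, which is exactly the hazard described above.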
  17. Well the vxWorks based controllers are a bit of a strange animal in the flock. VxWorks uses a lot of unix and posix like functionality but also has quite a bit of deviations from this. I'm not really sure if the Windows like file system is part of this at all, or if the drive letter nomenclature is in fact an addition by NI to make them behave more like the Pharlap controllers. Personally I find it strange that they use drive letters at all, as the unix style flat file hierarchy makes a lot more sense. But it is how it is and I'm in fact surprised that the case sensitivity does not apply to the whole filename. But maybe that is a VxWorks kernel configuration item too, that NI disabled for the sake of easier integration with existing Pharlap ETS tools for their Pharlap based controllers. VxWorks only was used because Pharlap did not support PPC compilation and at that time x86 based CPUs for embedded applications were rather non-existent, whereas PPC was more or less dominating the entire high end embedded market from printers to routers and more. The use of PPCs for Mac computers was a nice marketing fact but really didn't mount up to any big numbers in comparison to the embedded applications of that CPU.
  18. While I'm in the club of trying to avoid crashing whenever possible, I find a catch-all exception handler that simply catches and throws away exceptions an even less acceptable solution. An exception handler is OK for normal errors where the exception cause itself gives you enough information to do something useful, such as retrying a failed operation. But it is ABSOLUTELY and DEFINITELY unacceptable for exceptions like invalid pointer accesses. If they happen I want to know about them as soon as possible, and to have done as little as possible afterwards. As such I find the option in the CLN to just continue after such exceptions highly dangerous. There are many people out there who believe that writing less-than-perfect external code is just fine: let LabVIEW catch the exception and happily go on. An invalid pointer access, or any other error caused by it later on (writing beyond a buffer often doesn't cause an immediate error, since the memory location is already used by something else in the app and as such is completely valid as far as the CPU and OS are concerned), needs to stop the program immediately, finito! There is no excuse for trying to continue anyway. Blame whoever wrote that crashing code, but you do not want LabVIEW to eat your harddrive or whatever else! If you talk about bits in any form of integer, then write access to them is not atomic on any CPU architecture I know of. Even bytes and shorts are highly suspicious, even on the x86 architecture, since the memory transfer traditionally always happened in 32-bit quantities (and nowadays even in 64- or 128-bit quantities). Some reading material I consulted suggests that write access to anything but aligned integers (and on 64-bit architectures aligned 64-bit integers) is not guaranteed to be atomic on just about any CPU architecture out there.
I can't be sure, and am in fact too lazy to make a long research project out of this, so in all my C code I employ cmpxchg() operations when writing bits and bytes to structure elements that are not integer-aligned, or that are not guaranteed not to share bytes of the aligned integer address with other variables (unless of course I can positively prove that nobody else will ever try to write to the same integer in any form, and in C that means that the routine writing to that address also has to be at least protected or single-threaded).
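The cmpxchg-on-the-containing-word trick described above can be sketched like this with C11 atomics (a hypothetical illustration; the function name is made up). Instead of storing the byte directly, the whole aligned 32-bit word is re-read and CAS-ed in a loop, so concurrent writers of the neighbouring bytes can never be stomped on:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Update one logical byte lane inside an aligned 32-bit word that is
   shared with other variables. A plain byte store could be turned into a
   read-modify-write of the whole word by the hardware; a CAS loop on the
   containing word makes the update safe against concurrent writers of
   the other bytes. */
void store_byte_atomic(atomic_uint_least32_t *word, int byte_index,
                       uint8_t value)
{
    uint_least32_t shift = (uint_least32_t)byte_index * 8;
    uint_least32_t mask  = (uint_least32_t)0xFF << shift;
    uint_least32_t oldv  = atomic_load(word);
    uint_least32_t newv;
    do {
        /* Merge the new byte into the last observed word value; the CAS
           fails (and refreshes oldv) if anyone changed the word meanwhile. */
        newv = (oldv & ~mask) | ((uint_least32_t)value << shift);
    } while (!atomic_compare_exchange_weak(word, &oldv, newv));
}
```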
  19. Sigh! I mentioned that it is not correct for overlapping buffers. And the Microsoft C runtime source code is copyright protected, so you should not post it anywhere!
  20. That is true, but what do you want to say with that? This is in general what MoveBlock() and memmove() are about. Nothing magical at all!

```c
void MoveBlock(void *src, void *dst, int32 len)
{
    int32 i;
    if (((uintptr_t)src & 0x3) || ((uintptr_t)dst & 0x3) || (len & 0x3)) {
        for (i = 0; i < len; i++)
            ((char *)dst)[i] = ((const char *)src)[i];
    } else {
        for (i = 0; i < len / 4; i++)
            ((int32 *)dst)[i] = ((const int32 *)src)[i];
    }
}
```

This is not the real MoveBlock() implementation. This implementation would cause "undefined" behaviour if the two memory blocks overlap, while the MoveBlock() documentation explicitly states that overlapping memory blocks are allowed. Basically, the real implementation would have to compare the pointers and, depending on the result, start copying either from the beginning or from the end. That does not change anything about the fact that a pointer is just a meaningless collection of numbers if it points to something that is no longer allocated in memory. Once your CLN returns in your previous example, there is absolutely nothing that would prevent LabVIEW from deallocating the variant, except lazy deallocation because it determines that the same variant can be reused the next time the VI is executed. But there is no guarantee that LabVIEW will do lazy deallocation, and in real-world scenarios it is very likely that LabVIEW will deallocate that variant sooner or later to reuse the memory for something else.
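The direction-dependent copy just described — compare the pointers, then copy forward or backward — can be sketched like this (a memmove-style illustration, not NI's actual MoveBlock() implementation; the function name is lowercased to make that clear):

```c
#include <stddef.h>
#include <stdint.h>

/* Overlap-safe byte copy: when dst lies inside [src, src+len) a forward
   copy would overwrite source bytes before they are read, so copy
   backwards in that case -- the same idea memmove() implements. */
void move_block(const void *src, void *dst, size_t len)
{
    const uint8_t *s = (const uint8_t *)src;
    uint8_t *d = (uint8_t *)dst;

    if (d > s && d < s + len) {
        /* destination overlaps the tail of the source: copy backwards */
        while (len--)
            d[len] = s[len];
    } else {
        for (size_t i = 0; i < len; i++)
            d[i] = s[i];
    }
}
```

A real implementation would additionally do word-sized transfers where alignment allows, as in the naive version above; the pointer comparison is the only part that matters for overlap correctness.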
  21. That is not going to help much. That pointer is invalid the moment the Call Library Node returns, since the variant got deallocated (or at least marked for deallocation at whatever time LabVIEW feels like). And the third parameter is always an int32, but you need to pass 4 for 32-bit LabVIEW and 8 for 64-bit LabVIEW.
  22. In addition to what Shaun said, there are several potential problems in the current OpenG ZIP code with respect to localized character sets. If you use filenames with characters outside the 7-bit ASCII code table, the result will be very platform dependent. Currently the OpenG ZIP library simply takes the names as handled by LabVIEW, which is whatever MBCS the platform uses at that moment. This has many implications. The ZIP standard only supports local encoding or UTF8, and a flag in the archive entry says which it is. This is currently not handled at all in OpenG ZIP. Even if it were, there are rather nasty issues that are not trivial to work out. For one, if you run the library on a platform that uses UTF8 encoding by default (modern Linux and MacOSX versions), the pathnames in an archive created on that computer will in fact be UTF8 (since LabVIEW uses the platform MBCS encoding), but the flag saying so is not set, so things will go wrong when you move that archive to a different platform. On the other hand, on Windows LabVIEW uses the CP_ANSI codepage for all its string encoding, since that is what Windows GUI apps are supposed to use (unless you make it a full Unicode application, which is a beast of burden on its own even for normal GUI apps, and an almost impossible thing to move to in a programming environment like LabVIEW if you do not want to throw out compatibility with already created LabVIEW VIs). CP_ANSI is an alias for the codepage set in your control panels depending on your country settings. pkzip (and all other command line ZIP utilities) traditionally uses the CP_OEM codepage. This is an alias for another codepage, also depending on your country settings. It contains mostly the same language-specific characters in the upper half of the codepage as CP_ANSI does, but in a considerably different order.
It traditionally seems to come from the IBM DOS times, and for some reason MS decided for once to go for an official standard for Windows rather than the standard set by IBM. So an archive created on Windows with OpenG ZIP will currently use the CP_ANSI codepage for the language-specific characters and therefore come up with very strange filenames when you look at it in a standard ZIP utility. The solution as I have been working on it in the past months is something along these lines.

When adding a file to the archive, on all platforms:
- Detect whether the path name uses characters outside the 7-bit ASCII table. If not, just store it as-is with the UTF8 flag cleared.
- If it contains characters outside the 7-bit ASCII range, do the following:
- On systems other than Windows and MacOSX: detect whether we are on a UTF8 system; if not, convert the path to UTF8. In all cases set the UTF8 flag in the archive entry and store it.
- On Windows and MacOSX: detect whether we are on UTF8 (likely not); if so, just set the UTF8 flag and store the file. Otherwise convert from CP_ANSI to CP_OEM and, in case of successful conversion, store the file with this name without the UTF8 flag. In case the conversion fails for some reason, store it as UTF8 anyhow.

When reading, there is not much we can do other than observing the UTF8 flag in the archive entry. On non-Windows systems, if the flag differs from the current platform setting we have a real problem: codepage translation under Unix is basically impossible without pulling in external libraries like ICU. Although their existence is fairly standard nowadays, there are a lot of differences between Linux distributions, and making OpenG ZIP depend on them is going to be a big problem. On VxWorks it is not even an option without porting such a library too. On Windows we can use MultiByteToWideChar and vice versa to do the right thing. On MacOSX we have a similar API that "tries" to do mostly the same as the Windows functions, but I'm 100% positive that there will be differences for certain character sets.
There still is a big problem, since the ZIP standard really only provides a flag saying whether the names are UTF8 or not. If they are not, there is no information anywhere as to what actual codepage they are in. Remember, CP_OEM is simply an alias that maps to a codepage depending on your language settings. It is a very different codepage for Western European versus Eastern European country settings, and more different still for Asian country settings.
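The first decision step of the scheme above — store as-is with the UTF8 flag cleared, or go down the encoding path — boils down to a pure 7-bit ASCII test. A minimal sketch (the function name is made up for illustration; the actual codepage conversions would use platform APIs like MultiByteToWideChar and are deliberately not shown):

```c
#include <stddef.h>

/* Returns 1 if the path name contains any byte outside the 7-bit ASCII
   range and therefore needs the codepage/UTF8 handling described above;
   0 if it can be stored as-is with the UTF8 flag cleared. */
int needs_encoding_handling(const unsigned char *name, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (name[i] > 0x7F)    /* outside 7-bit ASCII */
            return 1;
    }
    return 0;
}
```

The nice property of this test is that pure-ASCII names are identical in every MBCS, CP_ANSI, CP_OEM and UTF8, so they are always safe to store unchanged.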
  23. Well, dynamic registration, unless you forbid a once-registered reader to unregister, makes everything quite a bit more complex. You can then get holes in the index array that would block the writer at some point, or you have to add an intermediate refnum/index translator that translates the static refnum a reader gets when registering into a correct index into the potentially changing index array. I'm not sure this is worth the hassle, as it may well destroy any timing benefits you have achieved with the other ingenious code.
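The refnum-to-index translation layer mentioned above could be sketched like this (a hypothetical toy, not taken from the code under discussion): readers receive a stable refnum at registration, while the slot table can be repointed whenever the writer compacts its dense index array after an unregistration.

```c
#define MAX_READERS 8
#define INVALID_SLOT (-1)

/* Maps stable reader refnums to current indices in the writer's
   (potentially compacted) index array. */
typedef struct {
    int slot[MAX_READERS];   /* refnum -> current index, or INVALID_SLOT */
} ReaderTable;

void table_init(ReaderTable *t)
{
    for (int i = 0; i < MAX_READERS; i++)
        t->slot[i] = INVALID_SLOT;
}

/* Register a reader at the given index; returns the stable refnum the
   reader keeps for its lifetime, or INVALID_SLOT if the table is full. */
int table_register(ReaderTable *t, int index)
{
    for (int r = 0; r < MAX_READERS; r++) {
        if (t->slot[r] == INVALID_SLOT) {
            t->slot[r] = index;
            return r;
        }
    }
    return INVALID_SLOT;
}

void table_unregister(ReaderTable *t, int refnum)
{
    t->slot[refnum] = INVALID_SLOT;   /* hole; writer may compact later */
}

int table_lookup(const ReaderTable *t, int refnum)
{
    return t->slot[refnum];
}
```

The cost the post warns about is visible here: every reader access now goes through one extra lookup, and that indirection sits on the hot path of exactly the code that was optimized for speed.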
  24. Well, if you say its function is not interesting, I'm more than happy to believe you. But!!! You didn't comment on whether the LvVariant is byte-packed, in which case access to the pointers in there would incur quite a performance penalty on at least all non-x86 platforms, or whether the structure uses natural alignment, in which case your calculation formula for the size would in fact be misleading. Note: it seems the structure uses, at least on Windows x86, the standard LabVIEW byte alignment, that is, byte-packed. All other platforms, including Windows x64, will likely have natural/default alignment. But your documentation is definitely not completely correct. The LvVariant looks more like:

```c
#pragma pack(1)
typedef struct {
    void *something;   // maybe there is still a lpvtbl somehow
    char flag;         // bool
    void *mData;
    void *mAttr;
    int32 refCount;
    int32 transaction;
    void *mTD;
} LvVariant;
#pragma pack()
```

This is for Windows x86. For the others I assume the pragma pack() would have to be left out. Also, I checked in LabVIEW 8.6 and 2011 and they both seem to have the same layout, so I think there is some good hope that the layout stays consistent across versions, which still makes this a tricky business to rely upon.
  25. That may be an interesting approach, but there is no guarantee that the generated C code is binary compatible with what LabVIEW uses itself internally. They are entirely different environments and the CPP generated code is in fact free to use different and simpler datatypes for managing LabVIEW data, than what LabVIEW does use. The CPP generated code only has to provide enough functionality for proper runtime execution while the LabVIEW environment has more requirements for editing operations.