Everything posted by Rolf Kalbermatter

  1. Well, Win8 RT is not an option, for several reasons. Any Win8 RT application has to be a full .NET application, as it is based on the virtual machine execution of .NET to achieve hardware independence (so it works on ARM, RISC, and x86 CPUs alike). But the Pipes library uses Windows APIs that are prone to very slight platform differences in the kernel. While Windows tries to maintain backwards compatibility as much as possible, this API is exercised infrequently enough that a few minor incompatibilities between Windows versions can slip through.
  2. I wouldn't be surprised if the pipes offer higher throughput than network sockets. They are implemented in the Windows kernel, most likely with some form of shared memory pool that is mapped into both processes. As such they short-circuit quite a bit of overhead compared to going through the Winsock library. However, I have no numbers available. As to the latest version of the available code, the earlier link to the CVS repository on SourceForge is indeed the most recent one that is available and more or less working. I did some more trials on this but didn't get any more reliable operation out of it, and there is also a good chance that it has additional issues on Windows 8. This part of the Windows API is rather complex and involved, and can be influenced by many internal changes to the Windows kernel.
  3. There is no easy solution. The only proper way would be to implement a DLL wrapper that uses the Call Library Node callback methods to register any session that gets opened in some private global queue in the wrapper DLL. The close function then removes the session from the queue. The CLN callback function for abort checks the queue for the session parameter and also closes it if found. That CLN callback mechanism is the only way to receive the LabVIEW Abort event properly in external code.
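The registration scheme described above can be sketched in plain C. This is a hedged sketch, not NI's actual API: the MgErr and InstanceDataPtr stand-ins normally come from LabVIEW's extcode.h, and the session type, registry, and function names are all hypothetical. A real implementation would also protect the registry with a mutex.

```c
#include <stdlib.h>

/* Assumed stand-ins for LabVIEW's extcode.h types; names are illustrative. */
typedef int MgErr;
typedef void *InstanceDataPtr;
enum { mgNoErr = 0 };

/* A hypothetical driver session plus the private global registry that the
   wrapper DLL maintains. */
typedef struct { int id; int open; } Session;

#define MAX_SESSIONS 16
static Session *g_registry[MAX_SESSIONS];

static void registry_add(Session *s)
{
    for (int i = 0; i < MAX_SESSIONS; i++)
        if (!g_registry[i]) { g_registry[i] = s; return; }
}

static int registry_remove(Session *s)
{
    for (int i = 0; i < MAX_SESSIONS; i++)
        if (g_registry[i] == s) { g_registry[i] = NULL; return 1; }
    return 0;   /* not found: was already closed */
}

/* Open: create the session and register it in the global queue. */
Session *wrapper_open(int id)
{
    Session *s = malloc(sizeof *s);
    s->id = id; s->open = 1;
    registry_add(s);
    return s;
}

/* Close: remove the session from the queue and release it. */
void wrapper_close(Session *s)
{
    if (registry_remove(s)) free(s);
}

/* Abort callback, configured on the Call Library Node's Callbacks tab.
   LabVIEW invokes it when the VI hierarchy is aborted; the session is
   closed only if it is still found in the registry. */
MgErr wrapper_abort(InstanceDataPtr *instanceState)
{
    Session *s = (Session *)*instanceState;
    if (s && registry_remove(s)) {
        free(s);
        *instanceState = NULL;
    }
    return mgNoErr;
}
```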
  4. He just made a post replying to another post by one of his "buddies" asking if they have a trial version. Cute! Maybe Michael needs to add a specific check and reject any post with a link to yiiglo.com, rasteredge.com, or businessrefinery.com. Also interesting to know is that the "Company" link on the site arronlee likes to promote so much does not work at all. Always nice to do (online) business with a company that could literally be sitting on cloud nine as soon as you are not happy about something.
  5. Well, on that note, I have had all versions of LabVIEW since about 5.0 installed on my system. A few months back LabVIEW 8.2.1 started to crash on startup, but I didn't have any urgent need for it to work, so I left it at that. I did regularly check if it still crashed, because there was a potential project that might need minor maintenance work in the near future. Just before installing LabVIEW 2012 I tested again and it still crashed. After installing LabVIEW 2012 SP1 and the corresponding device driver DVD I tried again and it now worked. And no, I prevented the DAQmx driver from removing any support from the LabVIEW 8.2 directory by hiding it (and the other versions the DAQmx installer wants to rob of their DAQmx VIs) during the install! So while 2012 may be more stable, the underlying device drivers can make a much bigger difference.
  6. Those stubs could be the culprit. Your DLLs may, in their initialization routine (the code that gets executed automatically when the DLL is loaded into memory), call some of these stubs, expecting certain values or behavior, and get stuck in an endless loop waiting for those to change. Without seeing the DLL source code this is almost impossible to debug, though. During the initialization routine of a DLL, even on Windows, the system is more or less monopolized for the current process, which can result in a very sluggish or even completely locked-up system. If you have a chance to look at the source code or talk to the developer of the DLL, make sure they are not doing anything complicated in the DllMain() function. That is the function called on loading and unloading of the DLL. In fact there are a lot of things you are not allowed to do in there at all, according to MS; one of them, for instance, is trying to load other DLLs dynamically, as that has a good chance of locking up your system in a nice deadlock.
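The safe pattern is to keep DllMain() trivial and move real setup into an explicit init function. A minimal sketch, with stand-in typedefs so it compiles outside Windows (on Windows these come from windows.h); MyLib_Initialize and g_initialized are hypothetical names:

```c
#include <stddef.h>

/* Stand-ins for Windows types so this sketch compiles anywhere;
   on Windows they come from <windows.h>. */
typedef int BOOL;
typedef void *HINSTANCE;
typedef unsigned long DWORD;
typedef void *LPVOID;
#define TRUE 1
#define DLL_PROCESS_ATTACH 1
#define DLL_PROCESS_DETACH 0

static int g_initialized;

/* Heavy setup is deferred to an explicit init function the caller invokes
   after the DLL is loaded: this is the safe place to load other DLLs,
   create threads, and so on. */
BOOL MyLib_Initialize(void)
{
    g_initialized = 1;
    return TRUE;
}

/* Keep DllMain trivial: while it runs, the OS loader lock is held, so
   calling LoadLibrary(), starting threads, or waiting on synchronization
   objects here can deadlock the whole process. */
BOOL DllMain(HINSTANCE hinst, DWORD reason, LPVOID reserved)
{
    (void)hinst; (void)reserved;
    switch (reason) {
    case DLL_PROCESS_ATTACH:
        g_initialized = 0;   /* only cheap, loader-safe work */
        break;
    case DLL_PROCESS_DETACH:
        break;
    }
    return TRUE;   /* returning FALSE on attach fails the load */
}
```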
  7. No, definitely not! 4*2*2*4 should be the standard and strictly enforced for all LabVIEW programmers, if I had a say in this! And anyone using the 6*4*4*6 for a VI that is not private to the library should be banned from writing LabVIEW programs.
  8. Or it might be that the cell boundary calculation was unnecessarily done in all updates for each cell. I doubt NI would lack some clipping optimization when updating, for instance, the cell background of many cells, such that they would even attempt to draw anything on the screen that will not be visible. They do have to go into the right cell and update its attributes accordingly, of course, so the cell can display correctly when scrolled into the visible viewport. So your past optimizations may mainly have reduced the number of times cell boundaries were recalculated. Now, with that out of the way, your optimizations might not harm, but likely won't improve the speed much anymore. And beware of accidentally changing the cell height for one row. That might disable the nice optimization from Christina altogether and get you back to the old situation.
  9. Maybe I was a tad too modest here. Thinking about it, you are of course right. FGVs are powerful and are easier to learn for someone not knowing much about OOP. The problem is that without some OOP knowledge such a person is likely to either get stuck at the "set/get FGV with a little extra functionality" level, or to start creating FGV monsters, at least in the beginning. So while the initial learning curve to start using FGVs is fairly easy, doing the really powerful designs is just as steep a learning curve as learning LVOOP, with the difference that LVOOP comes with some tools right in the LabVIEW IDE to ease the more mechanical tasks, while FGVs generally have to be created manually each time. Also, the separation of methods and data is a definite advantage, and thanks to the project integration it is also easy to manage.
  10. While I certainly also am among the people who should attend the aforementioned LAA group, I do not try to hide these bends. Alignment where it is possible, yes; otherwise leave it. I prefer to see that the wire goes indeed to the terminal that it looks like and not some other one, even if that alignment may only be off by one pixel. Nothing is as frustrating for me as connector panes (pains) that are chaotic, or wires going into an icon somewhere other than where they really connect.
  11. You should mention that you have also posted this elsewhere (the NI forum), as that can help people see whether they can add anything useful to the thread, instead of repeating what others already said. It is also a good way of providing additional references for anyone coming across similar problems in the future and landing here instead of on the NI forums.
  12. That is a somewhat strong simplification! Technically you are right; conceptually an AE is a completely upside-down way of doing OOP. OOP is about encapsulating the data which the methods can work with, while an AE is about encapsulating the data AND the methods in one place. The data always lives together with the methods, which makes things like instantiation a bit problematic. There is also the aforementioned problem of the conpane, which is not infinitely expandable. While this is a limit, I haven't found it a limit in the sense that I could not do things I wanted to do. And the side effect is that it makes you think more about extending such an "object". And that is usually a good thing (except sometimes for project deadlines). As to the code bloat: as soon as you start to write accessor wrappers for the individual AE methods, you go down the same road. AEs only work through discipline from the implementer and the user (unless you wrap them, at which point the AE implementation becomes a detail that should not interest the user at all anymore). LVOOP works through certain contracts that the development environment and compiler impose on both the implementer and the user of the class. You can make a similar point (albeit only in the aspect of implementing one with the other) about C and C++. You can write object-oriented code in C just as well, but you have no support from the compiler environment for that. Everything beyond the normal C rules has to be done by discipline of the programmer, rather than by the compiler checking that object classes are indeed compatible and can be cast from one class to the other, just to name an example. Also, inheritance is a nice feature in OOP, as it allows you to easily implement variations on a theme. At the same time, it is also one of the more abused features in many OOP designs.
As soon as you find yourself trying to cram a potato class into a car interface, you should realize that you have probably just created a mutant monster that will eventually chase you in your worst nightmares. Inheritance in an AE context, on the other hand, is simply not feasible. But I would certainly agree that anybody claiming AEs to be generally inferior to classes is simply ignorant. They can be created and used very successfully, if you have your mind properly wrapped around them. I would however hesitate to claim that they are worth learning at this point instead of LVOOP. As an additional tool in a programmer's toolkit, though, they are still a very valuable and powerful addition to any LabVIEW programmer's expertise.
  13. Basically this whole discussion of perceived differences between LV2 Global, FGV, Action Engine, and IGV (Intelligent Global Variable) is a bit academic. Traditionally the LV2-style global was the first incarnation of this pattern, and indeed in the beginning it mostly had just get/set accessor methods. However, smart minds soon found the possibility of also encapsulating additional methods into the LV2-style global, without even bothering to find a new name for this. In the over 25 years of LabVIEW use, new terms have arisen, often more to just have a new term rather than to describe a fundamentally different design pattern. As such these names are in practice quite interchangeable, as different people will tend to use different terms for exactly the same thing. Especially the distinction between FGV/IGV and AE feels a bit artificial to me. The claimed advantage of AEs of having no race conditions exists simply through the discipline of the programmer, both the implementer and the user. There is nowhere an official document stating "AEs shall not have any possibility to create race conditions", and it would be impractical, as that would for instance mean completely disallowing any set- and get-like methods altogether, since otherwise race conditions can still be produced by a lazy user who prefers to implement his data-modifying algorithm around the AE rather than move it into a new method inside it. I would agree that LV2-style global is a bit of an old name and usually means the set/get variant, but it does not and has not excluded the possibility of adding additional methods to it, to make it smarter. For the rest, FGV, IGV, AE, and whatever else has come up are often used interchangeably by different persons, and I do not see a good cause in trying to force an artificial difference between them.
Daklu wrote:

Well, it is true there is a limit to the conpane, and one rule of thumb I use is that if the FGV/AE requires more than the 12-terminal conpane (that includes the obligatory error clusters and method selector), it has become too unwieldy and the design needs to be reviewed. I realize that many will say, oh, all that additional work to refactor such an FGV/AE when this happens; and yes, it is work, sometimes quite a bit in fact, but it will also inevitably result in refactoring parts of the project that have themselves become unwieldy. With OOP you can keep adding more and more methods and data to an object until even the creator can't really comprehend it logically anymore, and it still "works". The FGV has a natural limit which I don't tend to hit anymore nowadays, and that while my overall applications haven't gotten simpler.

Michael Aivaliotis wrote:

You bet I do! Haven't dug into LVOOP yet, despite knowing some C++ and quite a bit of Java/C#.

Daklu wrote:

I think it has a lot to do with how your brain is wired. AEs and LVOOP are trying to do similar things in completely contrary ways. I would agree that AEs are not a good solution if you know LVOOP well, but I started with FGVs/AEs loooooooong before LVOOP was even a topic that anyone would have thought about. And in that process I went down several paths that I found to be dead ends, refining the process of creating AEs, including defining self-imposed rules to keep it all manageable for my limited brain capacity. They work amazingly well for me, and often allowed me to redefine functionality of existing applications by simply extending some AEs. This allowed me to keep the modifications localized to a single component and its support functions, rather than having to sprinkle changes throughout the application. The relatively small adaptations in the interface were easily taken care of, since the LabVIEW strict datatype paradigm normally pointed out the problematic spots right away.
And yes, I'm a proponent of making sure that the LabVIEW VIs that make use of a modified component break in some way, so one is forced to review those places at least once to see if there is a potential problem with the new addition. A proper OOP design would of course not need that, since the object interface is well designed from the start and never introduces incompatibilities with existing code when it gets extended. But while that is the theory, I found that in OOP I sometimes tend to extend things, only to find out that certain code that makes use of the object suddenly breaks in very subtle and sometimes hard-to-find ways, whereas if I had been forced to review all callers at the time I added the extension, I would have been much more likely to identify the potential problem. Programming AEs is a fundamentally different (and I certainly won't claim superior) paradigm to LVOOP. I'm aware that it is much less formalized and requires quite some self-discipline to use properly, but many of my applications over the years would not have been possible to implement in a performant way without them. And as mentioned, a lot of them date from before the time when LVOOP would even have been an option. Should I change to LVOOP? Maybe, but that would require quite a learning curve, and maybe more importantly, relearning quite a few things that work very well with AEs but would be quite a problem with LVOOP. I tend to see it like this: just as with graphical versus textual programming, some brains have a tendency towards one or the other, partly because of previous experience, partly because of training. I trained my brain over about 20 years in programming AEs. Before I could program the same functionality in LVOOP as I do nowadays with an AE, it would take me quite a bit more than weeks. And I would still have to do a lot of LVOOP before finding out what to do and what to avoid.
Maybe one of the problems is that the first time I looked at LVOOP it turned out to be a very frustrating experience. For some reason I can fairly easily accept that LabVIEW crashes on me because of errors I made in an external C component, but I get very upset if it crashes on me because I did some seemingly normal edit operation in the project window or such.
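For readers without a LabVIEW background, the FGV/AE pattern discussed in these posts can be approximated in C: one entry point, an operation selector, and state that lives only inside the "engine", mirroring the uninitialized shift register of an FGV. This is a loose analogy with invented names (AeOp, stats_engine), not a translation of any real VI; note that a non-reentrant LabVIEW VI also serializes its callers, which a plain C function does not.

```c
#include <stddef.h>

/* Operation selector, like the enum wired to the AE's case structure. */
typedef enum { AE_INIT, AE_ADD_SAMPLE, AE_GET_MEAN } AeOp;

double stats_engine(AeOp op, double value)
{
    /* Private state, playing the role of the uninitialized shift
       register: visible only inside the engine. */
    static double sum;
    static size_t count;

    switch (op) {
    case AE_INIT:
        sum = 0.0;
        count = 0;
        return 0.0;
    case AE_ADD_SAMPLE:
        /* A "smart" method rather than bare set/get: the whole
           read-modify-write happens inside the engine, so callers
           cannot interleave and race each other on the data. */
        sum += value;
        count++;
        return value;
    case AE_GET_MEAN:
        return count ? sum / count : 0.0;
    }
    return 0.0;
}
```

The AE_ADD_SAMPLE branch is the crux of the "no race conditions" claim above: as long as every modification of the data is a method inside the engine, callers never hold the raw state, so there is nothing for them to clobber.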
  14. I compiled a version but it crashed, so I left it at that for the time being. No use in releasing something that does not work. I should get some better test setups soon, so that debugging will work more easily. And the Pipes library was never officially released, so we never ported it over from the CVS repository to the SVN one. It's still in CVS only.
  15. As with all OpenG sources, they are on the OpenG Toolkit project page on sourceforge. All of them!
  16. "Never" would seem a very strong statement to me. See the OpenG LabPython, LVZIP and Pipe library just to name a few. It seems the person having done the vxcan API wrapper did indeed "forget" to add the C code to the download, especially since that wrapper doesn't really consist of any magic at all, but simply some C to LabVIEW parameter mapping. I fully understand that providing multiple platform wrappers can be a real pain in the ass, which would make it a good idea to add the C source of those wrappers, so others can recompile for new platforms, but doing everything on the LabVIEW level is not a maintainable solution in the long run at all. Usually APIs are anyhow different enough between platforms that a pure LabVIEW wrapper gets a real pain to do, such that it works on multiple platforms, unless the API developer kept in mind to keep the API binary consistent between platforms.
  17. Unless you want to hack into the Import Library Wizard VI code yourself (and create a maintenance nightmare, since there is no publicly documented VI API for it so far, AFAIK), I don't believe there is currently an option. And the command line approach does not seem to me the ideal way of creating such an interface, since the Import Library Wizard would potentially require an entire page of possible command line parameters if you consider things like header directories, defines, etc.
  18. That is not cheating but the proper course of action, unless you enjoy playing C compiler yourself and creating a badly maintainable VI.
  19. Shaun, in theory you are right. In practice, a LabVIEW DLL is a C wrapper for each function that invokes the corresponding precompiled VI inside the DLL. As such, there needs to be some runtime support to load and execute these VIs. This usually happens inside the corresponding LabVIEW runtime, which is launched from the wrapper. Some kind of Münchhausen trick, really. However, at least in earlier versions of LabVIEW, if the platform and LabVIEW version of the compiled DLL were the same as those of the calling process, the wrapper invoked the VIs inside the DLL directly in the context of the calling LabVIEW process.
  20. Seems it is again the time to clean out the blog spam.
  21. There is no easy answer to this. As with most things, the right answer is: it depends! If your LabVIEW DLL was created with a different version than the LabVIEW version you are running your lvlib in, you are safe. The DLL will be executed in the context of the runtime version of LabVIEW that corresponds to the LabVIEW version used to create the DLL. Your LabVIEW lib executes directly in the calling LabVIEW system, so they are as isolated from each other as you can get on the same machine instance. However, if you load the DLL into the same LabVIEW development system version as was used to create it, things get more interesting. In that case LabVIEW loads the VIs inside the DLL into the same LabVIEW system to gain some performance. Loading the DLL into a different LabVIEW runtime requires marshaling of all function parameters across process boundaries, since the runtime system is a different process than your LabVIEW system, which is quite costly. Short-circuiting this saves a lot of overhead. But if the VIs in the DLL are not in the same version as the current LabVIEW version, this cannot be done, as the DLL VIs are normally stored without diagrams and can therefore not be recompiled for the current LabVIEW platform. So in this case things get a bit more complicated. I haven't tested so far whether VIs inside DLLs get loaded into a special application context in that case. That would be the best way to guarantee behavior as close as possible to the DLL being loaded into a separate runtime. But it may also involve special difficulties that I'm not aware of.
  22. This does not sound like any LabPython-specific issue but like a simple, basic Python problem. Please refer to a Python discussion forum for such questions; they can be of a lot more assistance to you than I could. When creating LabPython about 15 years ago I knew Python much more from the embedding API than anything else, and was just proficient enough in Python itself to write some simple scripts to test LabPython. I haven't used Python in any form or flavor since.
  23. Are you sure NI-IMAQ contains the barcode functions? I thought NI-IMAQ only contains the functions that are directly necessary for getting image data into the computer. The actual processing of images is then done with the NI Vision Development Module. And to heng1991: this software may seem expensive, but once you have exercised your patience trying to get external libraries not to crash while creating a LabVIEW interface for them, you will very likely agree that this price has a reason. Especially since, unless you are very experienced with interfacing to external libraries, you are very likely to create a VI interface that may seem to work but will in fact silently corrupt data in real-world applications.
  24. DSCheckPtr() is generally a bad idea for several reasons. For one, it gives you a false sense of security, since there are situations where this check would simply have to conclude that the pointer is valid while it still could be invalid in the context in which you make the check. Such a function can check a few basic attributes of a pointer, such as whether the pointer is not NULL and is a real pointer already allocated in the heap rather than just an address to some memory location, but it cannot check whether this pointer was allocated by the original context in which you make the check, or has since been freed and reallocated by someone else. And anything but the trivial NULL pointer check will cost significant performance, as the function has to walk the allocated heap pointers to find whether it exists in there at all. Windows also has such a function, which only works if the memory was allocated through the HeapAlloc() function, but its performance is notoriously bad and its security just as false. Use of this function is a clear indication that someone tried to patch up a badly designed library by adding some extra pseudo-security. As to atomic operations in the exported C API of LabVIEW, I'm not really aware of any, but I haven't checked in 2012 or 2013 whether there are new exports available that might sound like an atomic cmpxchg(). Even if there were, I find releasing a library that would not support at least 3 versions of LabVIEW not really a good idea. On the other hand, with some preprocessor magic it would not be too difficult to create a source code file that resorts to compiler intrinsics where available (MSVC >= 2005 and GCC >= 4.1.4) and implements the corresponding inline assembly instructions for the others (VxWorks 6.1 and 6.3, and MSVC 6 for Pharlap ETS). I could even provide my partly tested version of a header file for this. And if you want to be safe you should avoid using a U8 as the lock.
SwapBlock(), not being atomic as far as I know, has no way to guarantee that another concurrent call to it on an address adjacent to the currently swapped byte would not destroy the just-swapped byte, since the CPU generally works on 32-bit accesses. Also avoid the temptation to pack any data structure you want to access in such a way in memory. Only aligned accesses to memory will generally be safe from being stomped on by another thread accessing a memory address directly adjacent to it. If you can use 32-bit locks and ensure the 32-bit element is properly aligned in memory, SwapBlock() won't need to be atomic, as long as you can guarantee that no concurrent read/modify/write (SwapBlock()) access to the same address will ever happen.
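A minimal sketch of the preprocessor dispatch mentioned above: compiler intrinsics where available, with the inline-assembly branches for older compilers only hinted at. The atomic_cas32 and SpinLock names are my own, and only the GCC/Clang branch is exercised here; the lock word is a naturally aligned 32-bit value rather than a packed U8 flag, for the alignment reasons just described.

```c
#include <stdint.h>

/* Dispatch to a compare-and-swap intrinsic per compiler. The MSVC branch
   is illustrative; VxWorks/MSVC 6 would need inline assembly instead. */
#if defined(_MSC_VER)
  #include <intrin.h>
  static int32_t atomic_cas32(volatile int32_t *p, int32_t oldv, int32_t newv)
  {
      /* returns the value found at *p before the operation */
      return _InterlockedCompareExchange((volatile long *)p, newv, oldv);
  }
#elif defined(__GNUC__)
  static int32_t atomic_cas32(volatile int32_t *p, int32_t oldv, int32_t newv)
  {
      /* returns the value found at *p before the operation */
      return __sync_val_compare_and_swap(p, oldv, newv);
  }
#else
  #error "add an inline-assembly implementation for this compiler"
#endif

/* A 32-bit, naturally aligned lock word: wide and aligned enough that no
   neighbouring byte gets rewritten behind our back, unlike a packed U8. */
typedef struct { volatile int32_t lock; } SpinLock;

static int spin_try_lock(SpinLock *l)
{
    /* succeeds only if we observed 0 (unlocked) and wrote 1 */
    return atomic_cas32(&l->lock, 0, 1) == 0;
}

static void spin_unlock(SpinLock *l)
{
    atomic_cas32(&l->lock, 1, 0);
}
```

Because the CAS intrinsic returns the previous value, the try-lock is a single atomic read-modify-write; no separate test-then-set window exists for another thread to slip through.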
  25. Well, the VxWorks-based controllers are a bit of a strange animal in the flock. VxWorks uses a lot of Unix- and POSIX-like functionality but also deviates from it quite a bit. I'm not really sure if the Windows-like file system is part of this at all, or if the drive letter nomenclature is in fact an addition by NI to make these controllers behave more like the Pharlap ones. Personally I find it strange that they use drive letters at all, as the Unix-style single-rooted file hierarchy makes a lot more sense. But it is how it is, and I'm in fact surprised that the case sensitivity does not apply to the whole filename. But maybe that is a VxWorks kernel configuration item too, which NI disabled for the sake of easier integration with existing Pharlap ETS tools for their Pharlap-based controllers. VxWorks was only used because Pharlap did not support PPC compilation, and at that time x86-based CPUs for embedded applications were rather non-existent, whereas PPC more or less dominated the entire high-end embedded market, from printers to routers and more. The use of PPCs in Mac computers was a nice marketing fact but really didn't amount to any big numbers in comparison to the embedded applications of that CPU.