Everything posted by Rolf Kalbermatter

  1. QUOTE (Tomi Maila @ May 16 2008, 12:37 PM) I'm still not sure I can see the need for ms and frame accuracy on playback. What we have done so far is synchronization and combined storage of video and data acquisition (Synchronized Video & Data Acquisition Device, http://www.citengineering.com/pagesEN/products/sdvd.aspx) in order to have time-accurate live measurements such as blood pressure, ECG and similar together with the actual video recording of the operation, so that these things can later be localized exactly in relation to the action taken at that moment. However, playback of this video together with the data is usually not really in real time, and definitely not in strict real time, since the researcher normally wants to go through the interesting sections in slow motion. Rolf Kalbermatter
  2. QUOTE (Tomi Maila @ May 15 2008, 10:22 AM) Er! The big question here is: what is this good for? The human eye has a very limited time resolution, so what makes you or your customer believe that displaying every single frame at a very accurate time position is so important, rather than just keeping the overall speed of the movie true to the original timeline? Basically, Windows is not real time, and neither is any other desktop OS. So they are more or less inherently unable to guarantee that a whole video frame is delivered every 40 ms (25 frames per second) to an accuracy of only a few ms. Any normal video playback software therefore simply synchronizes the video stream timeline continuously to the actual time, skipping frames whenever appropriate. At a lower level (for instance when you control the QuickTime API directly, but DirectX/Direct Play surely has similar capabilities) you can opt for frame-accurate display instead of time-accurate display, but that usually means the playback timeline is no longer synchronized with the original timeline, as it sooner or later starts to lag. I do not see any way to guarantee both frame- and time-accurate display of movie material on non-dedicated hardware, other than buying the greatest and highest-performance hardware components, installing a hardware video decompressor that supports your video compression and preferably has a direct link (crosswire or dedicated PCIe channel), running the meanest and leanest OS you can possibly get your hands on, and keeping your fingers crossed that no system interrupts such as network traffic or other DMA transfers mess up your timing. With dedicated hardware, such as embedded systems running RT OSes specially optimized for media, this might be a different story. Rolf Kalbermatter
  3. QUOTE (marp84 @ May 16 2008, 03:41 AM) You need the Professional version of LabVIEW or the Application Builder add-on in order to do that. Then read the user manual on how to go about creating an executable. Rolf Kalbermatter
  4. QUOTE (Gary Rubin @ May 15 2008, 10:48 AM) Now you are exaggerating a bit. I mean, I've seen those "drivers", and they usually come from companies that produce some hardware and want to make it available to LabVIEW users, but they do not have a professional LabVIEW programmer and sometimes even just use the evaluation version of LabVIEW to create their drivers. In general that's a very bad idea, since the volume of technical support requests those companies create this way is huge, and they obviously have no resources to handle it. Which, depending on the customer, means he writes his own driver or abandons LabVIEW or the hardware in favor of a different product; both cases result in a dissatisfied customer. Now, I do write VI libraries too and develop "drivers" regularly. Some of them are openly available, some even free, and I would hope that those libraries/drivers do not fall under your category of poorly written "LabVIEW SDKs". They definitely almost never use sequences, and where they do, it is for data dependency only and nothing else. That there are people who still want to rewrite them may be true, but I would like to think that has more to do with the "not invented here" syndrome than anything else, and I have to admit I have gone down that path at times in the past too. Rolf Kalbermatter
  5. QUOTE (tengels @ May 14 2008, 07:52 AM) What is the serial interface? A USB-to-serial converter? If so, it may be a problem in its driver that VISA does not know how to deal with properly. I've seen strange behaviour with several USB-to-serial adapters in the past. Rolf Kalbermatter
  6. QUOTE (crelf @ May 14 2008, 01:41 PM) I've never seen the white one so far, and for the Call Library Node it wouldn't make sense anyhow. LabVIEW cannot determine whether an external shared library, or particular functions in it, are reentrant-safe. The programmer declares that in the Call Library configuration dialog (and if he says it is reentrant, the function in question had better be, or you are in for strange to rather nasty effects). Rolf Kalbermatter
  7. QUOTE (Michael_Aivaliotis @ May 15 2008, 03:12 AM) What is best and what is not is very debatable. LVOOP is most probably not such a bad thing, but such a driver restricts its use to applications that can and will use LVOOP. Also, just as with normal VI libraries, the usefulness and ease of use depend greatly on the person implementing the class. You can make a mess with (LV)OOP just as easily as with normal VI library interfaces, and in fact even more easily, since you need to understand OOP fairly well to deliver really reusable class implementations. I'm sure this is biased by experiences with some C++ code that can at best be called horrible to understand, but it's nevertheless a real experience, and it is also a result of my mind, which likes visual representation very much but has much more affinity with a functional interface than with some of the more complex OOP design patterns. Rolf Kalbermatter
  8. QUOTE (BrokenArrow @ May 13 2008, 01:41 PM) Not sure about shared variables, but TCP can be made fast in LabVIEW, and you do not even need to go down to the raw socket level. Just get a small VI from the NI site to disable the Nagle algorithm for a TCP network refnum and you are done, without the delays on small data packets that make command-acknowledge type protocols slow. As to being compiled: as far as LabVIEW is concerned, there should be little difference between development system and runtime system performance. If there were a big improvement, the application builder would have to do something at the shared variable engine level, which would be very spooky at best. Rolf Kalbermatter
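The NI VI mentioned above ultimately just flips the socket-level TCP_NODELAY option. A minimal sketch of what that amounts to on a raw Winsock socket, assuming you already have a connected socket handle (the helper function name is made up for illustration):

```c
#include <winsock2.h>

/* Disable the Nagle algorithm on an already connected socket so that small
   command/acknowledge packets are sent immediately instead of being held
   back and coalesced. Returns 0 on success, SOCKET_ERROR on failure. */
int disable_nagle(SOCKET sock)
{
    BOOL noDelay = TRUE;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                      (const char *)&noDelay, sizeof(noDelay));
}
```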
  9. QUOTE (Gary Rubin @ May 13 2008, 01:56 PM) I don't think you can draw the line that strictly. Very strictly speaking, the device driver is nowadays the piece of software that translates user application level requests into hardware-specific commands and address accesses. That piece has to reside inside the kernel as a kernel-mode device driver, since that is the only way to access hardware directly in today's protected-mode OSes. However, talking to that kernel device driver directly is tedious at best, so it usually comes with a DLL that provides an easier-to-use API and can be considered part of the driver as well. And with that, I see no reason to exclude the LabVIEW VIs that access that API from being part of the driver either. After all, they translate the not-so-easy-to-use DLL calls into something that can be used much more easily in LabVIEW. And once you are there, why not call any collection of VIs that translates access to some form of hardware into something more LabVIEW-friendly a driver too? I wouldn't go as far as calling VIs that access the normal OS API drivers, though, but that is an entirely arbitrary and subjective classification on my part. Rolf Kalbermatter
  10. QUOTE (maak @ May 12 2008, 02:29 AM) It's good practice to always use brackets around table names and column identifiers. This catches not only reserved words but also identifiers with embedded spaces, which SQL otherwise treats as syntax separators. Rolf Kalbermatter
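A hypothetical example of the bracketing (the table and column names are invented, and the bracket style is the one used by the Microsoft Jet/SQL Server dialects typically driven from the LabVIEW database toolkit):

```c
/* "Order" is an SQL reserved word and "Unit Price" contains a space;
   without the square brackets either one would break the statement. */
const char *query =
    "SELECT [Order], [Unit Price] FROM [Daily Sales] WHERE [Order] > 100;";
```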
  11. QUOTE (Ami @ May 12 2008, 04:00 AM) You are likely running into threading limitations. LabVIEW by default allocates 4 threads per execution system (and, as of version 8.5, per CPU core), and when executing external code that suspends execution, the calling thread is blocked until the external code returns. As long as you stay within LabVIEW altogether, LabVIEW will attempt to schedule multiple code sequences to run in parallel even if the 4 threads do not directly satisfy LabVIEW's needs, but once inside external code LabVIEW has no way of regaining control of that thread to keep your program working across multiple external calls. The solution would be to avoid blocking calls to external code altogether, or to disperse the different calls into different subVIs and assign them to different execution systems, or to increase the number of threads allocated to your execution system with threadconfig.vi. These options are in declining order of recommendation, as they get more complicated to set up and maintain in the long run, and the last one will eventually burden the system with a load that may bring it to a halt. Rolf Kalbermatter
  12. QUOTE (BrokenArrow @ May 9 2008, 12:59 PM) Flush Buffer will delete data already in either the receive or transmit buffer. Discard Events will discard any events that might have already been queued for the current refnum. Those are two very different things. Rolf Kalbermatter
  13. QUOTE (kawait @ May 9 2008, 05:04 AM) Unfortunately this does not specify the calling convention! You would need something like __stdcall in those two declarations as well. If you are using Visual C, however, you are most likely using stdcall already, since this is the default calling convention used by 32-bit Microsoft C compilers. To be sure, check in the compile configuration for your project or files what the default calling convention is. Rolf Kalbermatter
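For illustration, a hedged sketch of what the two declarations could look like with the calling convention spelled out explicitly; the function names and signatures are made up, only the __stdcall keyword is the point:

```c
/* Hypothetical exports: the explicit __stdcall makes the calling convention
   unambiguous, so it matches whatever is selected in the Call Library Node
   configuration regardless of the compiler's default setting. */
#ifdef __cplusplus
extern "C" {
#endif

__declspec(dllexport) long __stdcall MyOpen(const char *resourceName);
__declspec(dllexport) long __stdcall MyClose(long handle);

#ifdef __cplusplus
}
#endif
```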
  14. QUOTE (netta @ May 8 2008, 12:59 PM) I usually do this by creating a Top Level.vi and dropping in whatever dynamic VIs I have. A mass compile won't help, since LabVIEW unloads VIs during the mass compile as soon as they are no longer used. The project doesn't help either, because the VIs in a project are not yet loaded. Rolf Kalbermatter
  15. QUOTE (Gavin Burnell @ May 7 2008, 06:00 PM) A question of tweaking and benchmarking by the LabVIEW developer team. The general reasoning goes like this: a value that is too small limits LabVIEW's ability to offload some of the thread management to the OS, while a value that is too high causes performance degradation due to the increased management overhead in the OS. So 4 is probably a good compromise between letting LabVIEW map several code "clumps" directly to OS threads, without having to schedule the clumps too much itself, and the extra overhead the OS has in managing many threads effectively. In terms of the number of threads allocated per process, LabVIEW is really one of the heavier heavyweights among typical applications. Rolf Kalbermatter
  16. QUOTE (netta @ May 6 2008, 08:51 AM) It uses whatever the OS is able to provide and does not know the difference between physical and virtual memory at all, since that is all managed by the OS. However, there is a default limit of 2 GB of memory per process on all 32-bit OSes, independent of the actual physical memory available. Most OSes have an option that allows the OS to give an application up to 3 GB of memory, but for more you need true 64-bit OS and application support. Rolf Kalbermatter
  17. QUOTE (kawait @ May 6 2008, 10:05 PM) Well, you always have to allocate the space in the caller for C-type pointers. Nothing wrong with that. In principle there is also no difference between passing a byte array and passing a string; memory-wise they are the same. But the way you do it now, you will receive a byte array from the DLL that is exactly 100 bytes long, independent of how many characters the DLL filled in. The Byte Array to String function will not change anything about that, so you always end up with a string of 100 characters, with the desired characters at the beginning and NULL characters in the rest of the string. The Call Library Node does, however, have special treatment for parameters that are configured to be C string pointers: on return, LabVIEW scans the string for the terminating NULL character and resizes the string to that length. So by moving the Byte Array to String function before the Call Library Node you both allocate the buffer (it's a lot easier to allocate a byte array using Initialize Array than to put a string constant on the diagram containing the necessary number of spaces or whatever) and, on return, you already get the string properly resized to the real string. Please note that this resizing will only shrink the string buffer. You do need to make sure that you pass in an array that is, under all circumstances, big enough to receive the actual string.

      QUOTE BTW, after I made my post here, I seem to have found a big mistake. The explicit call to the cleanup function in my VI seemed to be the one causing trouble. After I removed the explicit call to the cleanup function, my VI doesn't crash at all after almost 10 run cycles (I can't say that it won't crash though). I know it is just because I have done the cleanup of some resource too early in my DLL cleanup function, and I will take a further look at that.

      That is the difficulty! No crash does not mean it is already working perfectly. It may still just destroy non-vital data, which could only show up as a crash when you exit LabVIEW, as it tries to deallocate resources you happen to have destroyed, or it may be as subtle as visual or non-visual corruption of your actual VIs that only triggers many days later. Rolf Kalbermatter
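On the C side, the situation described above might look like the following sketch; the function name and its contents are hypothetical, and the caller (the LabVIEW diagram) must supply a buffer that is guaranteed to be large enough:

```c
#include <string.h>

/* Hypothetical DLL export: copies a NUL-terminated C string into the
   caller-supplied buffer. When the Call Library Node parameter is
   configured as a C string pointer, LabVIEW trims the returned string
   at the first NUL character. */
__declspec(dllexport) void __stdcall GetDeviceName(char *buf, int bufLen)
{
    const char *name = "SimulatedDevice";  /* stand-in for the real data */
    if (buf == NULL || bufLen <= 0)
        return;
    strncpy(buf, name, (size_t)bufLen - 1);
    buf[bufLen - 1] = '\0';                /* always terminate */
}
```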
  18. QUOTE (xinbadaz @ May 6 2008, 01:47 AM) This is a bug in the type library for the above ActiveX component. That type library contains a directory entry that points to the same directory as the type library itself, but the help file name in there is only the file name, with no subdirectory. It's not a LabVIEW fault at all, since LabVIEW should not have to worry about the language ID of help files and accordingly should not have to search in subdirectories of the expected location for that file; the type library should point at the right location directly. As to how to let LabVIEW find the help file: there is not much you can do, since editing the type library is not really an option. I would just copy the help file to where the type library says it is and be done with it, or upgrade to a newer Office version, if I used Excel over ActiveX at all. Rolf Kalbermatter
  19. QUOTE (kawait @ May 5 2008, 09:17 PM) Lots of questions and few hard facts to check and look at. A few comments though:

      1) What I usually do is have one or more exported functions to allocate resources (my DLLs typically can handle multiple connections/resources) and corresponding functions to release those resources. All resources are also stored in a global list, and on DLL_PROCESS_DETACH I check this list and deallocate anything that might still be in there. I normally never do resource allocations in DLL_PROCESS_ATTACH, but instead make sure that the functions check and, if necessary, allocate whatever they need at runtime only.

      2) If you are positive that the types are compatible, it really doesn't matter. Those nice extcode.h data types are there for the programmer's clarity and to allow LabVIEW to define them to whatever a specific compiler may need, but in today's 32-bit-only world they are fairly consistent across compilers and platforms.

      3) Not really clear to me. But of course, for parameters that need to return something to the caller, you always need to pass them by reference (as a pointer). You can also convert the byte array to a string and pass it as a C string pointer directly. This has the added bonus that LabVIEW will search for the NULL terminating character on return and pass only the valid part of the string onward. Never ever use DSNewPtr or any of the other *Ptr memory manager functions for DLL function parameters that are passed from or returned to LabVIEW. Those parameters are either scalars, C pointers or LabVIEW data handles. For scalars there is nothing special; pointers have to be allocated in the caller (e.g. with Initialize Array). The only parameters passed from and to LabVIEW that can be modified in terms of memory allocation are handles, and for those you need the corresponding handle functions of the memory manager, although for most cases I would really recommend using NumericArrayResize whenever possible.

      4) No! LabVIEW is definitely not more or less stable in calling external code routines in just about any version since 5.x or so. What you see is that in an executable the memory layout of your application is of course different, so when your wrongly configured function parameters or your wrapper DLL happen to step on invalid memory (invalid in the sense that they shouldn't try to write to it, or even access it at all), they will overwrite sometimes vital LabVIEW runtime data and sometimes just some LabVIEW edit data. Overwriting LabVIEW runtime data will sooner or later cause big trouble: it could crash as early as the moment of the write, or as late as when exiting the application, when LabVIEW tries to deallocate resources that have been invalidated by the overwrite. Overwriting LabVIEW edit data may only be detected when you happen to open, for instance, a front panel or diagram and try to make some modifications; it could be just some strange text somewhere, or weird attributes of objects, or LabVIEW could trip over the nonsensical data and go belly up at that point. Rolf Kalbermatter
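A minimal sketch of the pattern described in point 1, assuming a simple fixed-size global list and leaving out any locking; all names here are hypothetical:

```c
#include <windows.h>
#include <stdlib.h>

#define MAX_RESOURCES 64
static void *g_resources[MAX_RESOURCES];  /* every live allocation is tracked here */

__declspec(dllexport) void * __stdcall AllocResource(size_t size)
{
    void *res = malloc(size);
    if (res != NULL) {
        for (int i = 0; i < MAX_RESOURCES; i++) {
            if (g_resources[i] == NULL) { g_resources[i] = res; break; }
        }
    }
    return res;
}

__declspec(dllexport) void __stdcall ReleaseResource(void *res)
{
    for (int i = 0; i < MAX_RESOURCES; i++) {
        if (g_resources[i] == res) { g_resources[i] = NULL; break; }
    }
    free(res);
}

BOOL WINAPI DllMain(HINSTANCE hInst, DWORD reason, LPVOID reserved)
{
    if (reason == DLL_PROCESS_DETACH) {
        /* Free whatever the caller forgot to release; never allocate here. */
        for (int i = 0; i < MAX_RESOURCES; i++) {
            if (g_resources[i] != NULL) {
                free(g_resources[i]);
                g_resources[i] = NULL;
            }
        }
    }
    return TRUE;
}
```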
  20. QUOTE (Jim Kring @ May 4 2008, 01:02 PM) Actually I do, but I have never used it so far. The reason is that it is in fact still unoptimized: you first create the entire array and then build it into the really desired array, although that last step happens quite optimally. In a lot of cases I end up writing more complex algorithms anyhow that avoid the memory fragmentation caused by the Build Array inside the case structure altogether. At one point that helped me reduce the runtime of a particular function from about 50 seconds to far less than 1 second. Of course, writing that function also took me a bit more than the inversely proportional amount of time compared to the first approach with a simple Build Array inside a case structure. But considering how many hours of waiting for the calculation result my customer has probably saved since then, I feel it was well-invested development time, and it was fun and educational too. The OpenG function would likely have been only slightly slower, but it would have temporarily taken up more than double the memory of my approach, which at the time was not insignificant in relation to the available physical memory.

      Whenever I hear someone swear at how bad the speed of programs written in LabVIEW is, I just smile and think about this. It's not that LabVIEW is slower than C in most cases, or that it is that much harder to write well-performing algorithms in LabVIEW, but simply that it is a lot easier to write an algorithm in LabVIEW at all. Sometimes those algorithms end up in a form a C programmer would not even consider, because he has to deal with every memory allocation anyhow and is therefore likely to look for an algorithm where he does not have to do that over and over again throughout his code.

      But the built-in feature suggested by the OP would be even more optimal than the OpenG function, although of course not as optimal as the unconditional For loop auto-indexing. If I were a LabVIEW engineer I would make that conditional auto-indexing generate the same machine code as what is used for While loops: there, the intermediate array starts off with some initial size, and whenever it gets too small the currently allocated size is doubled; at the end of the loop the array memory is resized once more to its really used size. This is the most efficient approach in terms of memory (re)allocations and data copying for generic situations where you do not know the final size beforehand. And one last note: I wonder when a patent will be filed by NI for exactly this. From my understanding of patents it would not be valid, since the idea has been published here prior to such a filing, but who would go to court over this? Rolf Kalbermatter
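What that While-loop growth strategy boils down to, as a small hypothetical C sketch (the computed value and the keep/discard condition are placeholders standing in for the loop body and the conditional tunnel):

```c
#include <stdlib.h>

/* Grow-by-doubling pattern: start with a small capacity, double it whenever
   the buffer runs out, and trim once to the really used size at the end.
   This keeps the number of reallocations and copies logarithmic in the
   final element count. Returns the buffer (caller frees) or NULL. */
double *collect_values(int iterations, int *outCount)
{
    size_t capacity = 16, used = 0;
    double *buf = malloc(capacity * sizeof *buf);
    if (buf == NULL) return NULL;

    for (int i = 0; i < iterations; i++) {
        double value = (double)i;      /* placeholder for a computed value   */
        int keep = (i % 3 == 0);       /* placeholder for the tunnel's flag  */
        if (!keep) continue;
        if (used == capacity) {        /* buffer full: double the allocation */
            capacity *= 2;
            double *tmp = realloc(buf, capacity * sizeof *buf);
            if (tmp == NULL) { free(buf); return NULL; }
            buf = tmp;
        }
        buf[used++] = value;
    }

    if (used > 0) {                    /* trim to the really used size once  */
        double *tmp = realloc(buf, used * sizeof *buf);
        if (tmp != NULL) buf = tmp;
    }
    *outCount = (int)used;
    return buf;
}
```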
  21. QUOTE (MartinGreil @ May 2 2008, 10:30 AM) It would mean that every auto-indexing output tunnel on a loop has an (optional) boolean. Interesting idea, but ohhhh so unintuitive! One more of those features that even LabVIEW cracks would only discover by accident after years of programming. Rolf Kalbermatter
  22. QUOTE (ragglefrock @ May 3 2008, 11:44 AM) My understanding is that LabVIEW does not do any optimization across VI boundaries, so Call By Reference and a subVI should not make any difference. The subVI is assumed to reuse the data; if it doesn't, the data is deallocated. That is why it is a good idea to always wire large data completely through a subVI (not entering in one case and leaving in only one case, but entering outside any case structure, being wired through every single case frame, and then to the indicator terminal outside the case again), even if you do not modify it inside the subVI. Only for nodes (built-in icons, usually with a white or yellow background) does LabVIEW know whether they reuse their input data or allocate a copy. For nodes reusing the data, the same rules apply as for subVIs, but for nodes that do not use the input data in any way after they have finished execution, LabVIEW makes sure to order execution such that those nodes execute first, if no other data dependency prevents it, in order to save memory copies. Rolf Kalbermatter
  23. QUOTE (raul70 @ Apr 29 2008, 07:33 AM) One other thing I see is the command string to set the ESE and SRE registers. You seem to have them separated with a line feed only. Maybe the Keithley accepts that, but standard usage is to separate several commands in one command string with semicolons. It could just as well be that, in this form, the Keithley device only sees the ESE command and not the SRE command. Rolf Kalbermatter
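For illustration only (the register values below are placeholders, not taken from the original program), both commands would travel in a single semicolon-separated write:

```c
/* Hypothetical values: one message, two commands, separated by a semicolon,
   so the instrument parses both the ESE and the SRE setting. */
const char *cmd = "*ESE 16;*SRE 32\n";
```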
  24. QUOTE (crelf @ Apr 30 2008, 06:58 AM) I feel completely misunderstood. I always thought I was part of the 0.3% that understands anything and everything. Rolf Kalbermatter
  25. QUOTE (Yen @ May 2 2008, 05:30 AM) Yen, I think you take it a little too literally. Criticizing the massive amount of time our Western society spends consuming more or less informative or useful TV does not mean saying that TV, or doing nothing, is bad per se. But on the other hand, a large part of TV consumption is not very productive in any way, other than keeping some people off the streets (which in itself could be a welcome cause for some, although not a sufficient one for me). No human can be fully active 24 hours a day without burning out pretty fast. But I do believe that lots of things that should actually get done are not getting done, because it is so much easier to sit on a sofa and watch some more or less stupid repetition of what has been shown hundreds of times before. I can understand to some extent that someone wants to watch Big Brother once or twice to see what it is about. I cannot bring up much understanding for the fact that this kind of program has been repeated time after time on I don't know how many channels for many years, and people still keep watching. It's this kind of TV that makes me support many of Clay Shirky's statements. And I do believe that something has to change. His message may help wake some people up and make them realize that life is not just about sitting in front of a TV screen, and consumption in general, and the necessary evil of work to support it, but that there is a lot more to it than that. Rolf Kalbermatter