Everything posted by Rolf Kalbermatter

  1. QUOTE (BrokenArrow @ Jun 3 2008, 09:11 AM) Actually it probably depends on the chosen font. Unless you use a TrueType (TT) font, Windows will not scale fonts smoothly in one-step increments. Non-TT fonts are not defined by glyphs but by bitmaps, and they only exist in a discrete number of sizes. Windows does not attempt to scale bitmap fonts in one-step increments because the result would be VERY bad looking. Real TT fonts allow almost seamless scaling to just about any size. LabVIEW does not have anything to say about that: it specifies the font name, size and attributes, and Windows does whatever it thinks it can do. LabVIEW has virtually no further control over that, other than querying the size of the resulting font to adapt its numeric controls to it. QUOTE (jdunham @ Jun 3 2008, 03:47 PM) I looked at your image. I think the fonts are exactly the same, but they are rendered to a different pixel size in Vista than they were in XP. Remember that integers can only be one line, so LabVIEW will always resize the numeric for the specific font size. For strings, the control itself does not resize automatically. As a test, select all of those objects and change the font size to 8 or 9 or something. The numeric array will get a lot smaller, but the string arrays won't change, even though their fonts do. No! The problem is that numeric controls adapt their height to the font applied to the number inside, whereas strings do not do that. This in fact copies Windows control behaviour, which NI would have better left out IMHO. You can also see that you cannot resize numerics in height but only in length, whereas strings can be sized to any height independent of the font they display in. Rolf Kalbermatter
  2. QUOTE (crelf @ Jun 2 2008, 03:59 PM) Nope, that window does not have any title. Rolf Kalbermatter
  3. QUOTE (normandinf @ Jun 2 2008, 04:46 PM) Works nicely! But only as long as there is a single ActiveX/.NET control on the panel. Rolf Kalbermatter
  4. QUOTE (Tomi Maila @ May 29 2008, 07:27 AM) First, saying that LabVIEW does not have a memory manager is a bit of a stretch. It's not a garbage-collecting memory manager like Java has, and consequently it requires the application to be careful about memory allocation/deallocation to avoid memory leaks during operation, but there is nevertheless a layer between LabVIEW and the C runtime memory allocation routines that I would consider a sort of memory manager. It used to be a lot smarter in the old days, with the help of a memory allocator called Great Circle, to compensate for the inadequacies of what Windows 3.1 could provide for memory management. The behaviour you see is quite likely a feature. I come to this conclusion for two reasons. First, its behaviour is similar to how LabVIEW uses memory for data buffers when calling subVIs: this memory is also recycled and often not really deallocated. Also, the fact that Request Deallocation cleans it up would definitely speak against a leak. Leaks are memory whose reference the application has lost for some reason, and that does not seem to be the case here. The variant most likely keeps the array handle and, on downward resizing, simply adjusts the dimSize without invoking the memory manager layer to resize that handle. An interesting test would be to see what happens if the small variant does not contain 0 elements but a few instead. I could imagine that on an incoming 0-size (or maybe very small) array the existing internal buffer is reused (with copying of the incoming data to the internal buffer for small sizes), but on larger arrays the incoming handle is used instead and the internal handle gets really deallocated. Rolf Kalbermatter
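     A purely illustrative C sketch of the suspected shrink-in-place behaviour (the struct layout mirrors the commonly described LabVIEW array handle with an int32 dimSize in front of the element data; the function and its name are hypothetical, not actual LabVIEW source):

        /* Hypothetical illustration: "shrinking" an array by rewriting only the
           dimSize field of the existing handle. The allocation keeps its old,
           larger size, so process memory does not drop until something (such as
           Request Deallocation) forces a real resize of the handle. */
        typedef struct {
            int    dimSize;   /* number of elements currently considered valid */
            double elt[1];    /* element data; the allocation may be much larger */
        } DblArrayRec, **DblArrayHdl;

        static void ShrinkInPlace(DblArrayHdl h, int newSize)
        {
            if (newSize <= (*h)->dimSize)
                (*h)->dimSize = newSize;   /* memory manager layer never involved */
            /* growing beyond the current allocation would require a real resize */
        }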
  5. QUOTE (BrokenArrow @ May 27 2008, 11:46 AM) Sorry, I haven't used it, but MAX definitely won't see those boards at all. MAX is an NI application and supports NI hardware only (apart from external devices such as GPIB and PXI boards in an NI PXI rack). Rolf Kalbermatter
  6. QUOTE (Neville D @ May 23 2008, 11:18 AM) That is the VISA passport for HPIB boards if I'm not mistaken. Rolf Kalbermatter
  7. QUOTE (crelf @ May 22 2008, 07:38 AM) Well, there might be all kinds of ramifications. For instance, NI acquired a source code license to FlexLM for certain OSes (most probably Windows only) and certain uses (most probably allowing them to protect their own software with it, but not allowing them to create a licensing toolkit for use by people outside of NI). Macrovision (it seems they are now Acresso) has been a well-known company in the copy protection business, and I'm sure they employ quite capable lawyers too, so if I were NI I wouldn't try to go beyond what the acquired license allows, which I believe certainly has its price even as a non-royalty-free license that is not free to use for everything. Rolf Kalbermatter
  8. QUOTE (Michael_Aivaliotis @ May 21 2008, 10:54 PM) Hmm, why does clicking on that image give me a download box for a file accessmacro.zip? The link showing in the status bar definitely points to a jpg image. Rolf Kalbermatter
  9. QUOTE (crelf @ May 20 2008, 02:42 PM) Let's put it like this: the FlexLM core in LabVIEW and other NI products is an extensible system. However, the way it is built into LabVIEW, it assumes a specific secret key to sign licenses. So in order to generate your own licenses you would need to know that key, and NI certainly will not publish it since it is secret. And even if you knew it, using it would likely be against one or more laws protecting copyright, such as the DMCA, and probably some others. Rolf Kalbermatter
  10. QUOTE (Yen @ May 20 2008, 01:12 PM) Well, mine is 1280x800. Put it down to my age and the fact that my eyes aren't as sharp anymore. The screen resolution problem I usually solve by making my panels scalable anyhow. Not the LabVIEW autoscaling, mind you! Just some scaling of my own so that specific parts of the screen scale while others stay put relative to the rest. Rolf Kalbermatter
  11. QUOTE (eaolson @ May 19 2008, 02:19 PM) I'm not sure where you read about atheism being a religion in the post you replied to. I read about it being a belief, and I have to agree with that. Nobody can prove there is a deity, nor that there isn't, so even atheists believe in something. Rolf Kalbermatter
  12. QUOTE (Yen @ May 19 2008, 01:12 PM) Indeed. LabVIEW here only checks the OS error after doing socket calls and translates that error into its own error number. And it was decided that more granular error reporting is a good thing to do, and I agree with that. Another angle might be that the OP is not really interested in the actual connection at all, but rather in whether the network node is reachable at all. This is solved easily with a network ping as described in the following link: http://forums.ni.com/ni/board/message?board.id=170&thread.id=70801&view=by_date_ascending&page=1. I include a copy of the fixed version of that library. There is however one principal problem with LabVIEW's multithreading and the error reporting as done with sockets: the WSA error is maintained in a thread-specific global variable, and for the blocking call when waiting on an answer from the remote node, and in the case of parallel execution of this utility in general, this causes a small issue. For this select call, the reported error in case of a failure will never be the real error, since the select call happens in one of the threads of a multithreaded execution system while the error retrieval occurs in the UI thread. Also, if you run multiple ping calls in parallel, the error from one call may get overwritten by the execution of another call before the first VI had a chance to retrieve its associated error. The only real fix for this would be to write a wrapper DLL that does the actual socket call and the error retrieval together in one function per socket call (since a call to an external code routine is guaranteed by LabVIEW to be atomic in terms of the calling thread inside LabVIEW). I would like to add that the VIs are LabVIEW 7.0 and the original code is from m3nth on the NI forums. Rolf Kalbermatter
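     A minimal sketch of such a wrapper, assuming the Winsock select() call the ping library waits on; WrappedSelect and its parameters are illustrative names, not part of the original library. The point is only that the socket call and WSAGetLastError() execute in the same function, and therefore in the same thread (compile and link against ws2_32.lib):

        #include <winsock2.h>

        /* Performs the select() and fetches the thread-local WSA error in one
           function, so the error can no longer be lost to another LabVIEW thread
           or be overwritten by a parallel ping instance. */
        __declspec(dllexport) int WrappedSelect(SOCKET s, int timeoutMs, int *wsaError)
        {
            fd_set writeSet;
            struct timeval tv;
            int result;

            FD_ZERO(&writeSet);
            FD_SET(s, &writeSet);
            tv.tv_sec  = timeoutMs / 1000;
            tv.tv_usec = (timeoutMs % 1000) * 1000;

            result = select(0, NULL, &writeSet, NULL, &tv);
            *wsaError = (result == SOCKET_ERROR) ? WSAGetLastError() : 0;
            return result;
        }

     Each socket call the library uses would get its own such wrapper, and the Call Library Node then returns the call result and its error together in one atomic call.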
  13. I've got a Dell Latitude D830 with a 15.4" display and generally like it. In the office I use a docking station and an external 22" wide-screen display too. I don't have to do a lot of work on the road, but I do use the laptop-only mode at home and find it quite workable with this 15-inch screen. But I deliberately chose the lower 100 dpi instead of the maximum available 148 dpi because I hate those super tiny fonts, and even more the Windows hack of Large Fonts to work around that, and it also better matches the roughly 90 dpi resolution of normal desktop LCD monitors. Rolf Kalbermatter
  14. QUOTE (gmart @ May 16 2008, 04:44 PM) Well, it's not about whether it is, for example, Visual Studio or not, but about whether that IDE uses the SCC API. That API was specifically designed around the strict check-in/check-out philosophy and accordingly uses, enforces and even requires it. And that does not work well with SVN. Visual Studio is certainly not a very good IDE to use with SVN, since it really relies on the SCC API for its source code control integration. As others have said, there are other IDEs that are a lot more flexible in how they interface to SCC systems and that work a lot better with SVN than Visual Studio. Rolf Kalbermatter
  15. QUOTE (mkaravitis @ May 16 2008, 04:56 PM) Callback functionality is not supported by the Call Library Node. The simplest solution is to write a wrapper DLL in C that provides the callback function, translating the callback event into a LabVIEW occurrence or LabVIEW user event, plus a LabVIEW-callable function to install that callback. All in all, not something you are likely to solve without some good C programming knowledge. Rolf Kalbermatter
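     As a minimal sketch of what such a wrapper could look like (PostLVUserEvent and LVUserEventRef come from LabVIEW's external code interface in extcode.h; InstallDriverCallback and all other names are hypothetical placeholders for the actual driver API):

        #include "extcode.h"

        /* Hypothetical driver function that expects a C callback. */
        extern void InstallDriverCallback(void (*cb)(int code));

        typedef struct {
            int32 code;            /* data delivered to the LabVIEW event structure */
        } MyEventData;

        static LVUserEventRef gEventRef = 0;

        /* Callback invoked by the driver: translate it into a LabVIEW user event. */
        static void MyCallback(int code)
        {
            MyEventData data;
            data.code = code;
            if (gEventRef)
                PostLVUserEvent(gEventRef, &data);
        }

        /* LabVIEW-callable function (through the Call Library Node) that installs
           the callback; the user event refnum is created on the diagram and passed in. */
        __declspec(dllexport) void RegisterMyCallback(LVUserEventRef *eventRef)
        {
            gEventRef = *eventRef;
            InstallDriverCallback(MyCallback);
        }

     On the diagram you would create a user event with a matching data type, register it with an event structure, and pass its refnum to RegisterMyCallback through a Call Library Node.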
  16. QUOTE (Michael_Aivaliotis @ May 16 2008, 01:10 PM) My thinking exactly. However, quite a few of those manufacturers compete with NI mainly on one basis, and that is being cheaper than NI, so I guess that leaves not much room to spend money on real software development, especially since NI hardware has been getting more competitive in price too over the last few years. Rolf Kalbermatter
  17. QUOTE (Tomi Maila @ May 16 2008, 12:37 PM) I'm still not sure I can see the need for millisecond and frame accuracy on playback. However, what we did so far was synchronization and combined storage of video and data acquisition (Synchronized Video & Data Acquisition Device, http://www.citengineering.com/pagesEN/products/sdvd.aspx) in order to have time-accurate live measurements such as blood pressure, ECG and similar together with the actual video recording of the operation, so that these things can later be localized exactly in relation to the actual action taken at that moment. However, playback of this video together with the data is usually not really in real time, and definitely not in strict real time, as the researcher normally wants to go through the interesting sections in real slow motion. Rolf Kalbermatter
  18. QUOTE (Tomi Maila @ May 15 2008, 10:22 AM) Er! The big question here is: what is this good for? The human eye has a very limited time resolution, so what is it that makes you or your customer believe that the actual display of every single frame at a very accurate time position is so important, and not just the overall speed of the movie relative to the original timeline? Basically, Windows is not real-time, and neither is any other desktop OS. So they are more or less inherently unable to guarantee a whole video frame being transmitted every 40 ms (25 frames per second) accurately to a time scale of only a few ms. So any normal video playing software simply synchronizes the video stream timeline continuously to the actual time, skipping frames whenever appropriate. At a lower level (for instance when you control the QuickTime API directly, but DirectX/DirectPlay surely has similar capabilities) you can opt for frame-accurate display instead of time-accurate display, but that usually means that the timeline of the playback is not synchronized with the original timeline anymore, as it sooner or later starts to have a time lag. I do not see any way to guarantee both frame- and time-accurate display of movie material on non-dedicated hardware, other than simply buying the greatest and highest performance hardware components, installing a hardware video decompressor that supports your video compression and preferably has a direct link (crosswire or dedicated PCIe channel), using the meanest and leanest OS you can possibly get your hands on, and keeping your fingers crossed that no system interrupts such as network traffic or other DMA transfers will mess up your timing anyhow. Now with dedicated hardware such as embedded systems with specially optimized RT OSes for media applications, this might be a different story. Rolf Kalbermatter
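     For illustration, a small sketch of the time-accurate (frame-dropping) strategy described above; the function and its parameters are made up for this example and not taken from any particular player:

        #include <math.h>

        /* Return the frame that should be on screen for the current playback time.
           Frames that are already late are simply skipped: the timeline stays
           synchronized to the clock at the cost of not showing every frame. */
        int NextFrameToDisplay(double elapsedSeconds, double framePeriodSeconds,
                               int lastShownFrame, int totalFrames)
        {
            int due = (int)floor(elapsedSeconds / framePeriodSeconds);
            if (due >= totalFrames)
                due = totalFrames - 1;
            return (due > lastShownFrame) ? due : lastShownFrame;
        }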
  19. QUOTE (marp84 @ May 16 2008, 03:41 AM) You need the Professional Version of LabVIEW or the Application Builder add-on in order to do that. Then read the User Manual about how to go about creating an executable. Rolf Kalbermatter
  20. QUOTE (Gary Rubin @ May 15 2008, 10:48 AM) Now you are exaggerating a bit. I mean, I've seen those "drivers" and they usually come from companies producing some hardware and wanting to make it available to LabVIEW users, but they do not have a professional LabVIEW programmer and sometimes even just use the evaluation version of LabVIEW to create their drivers. It's in general a very bad idea, since the technical support requests those companies create in this way are huge and they obviously have no resources to support that. Which, depending on the customer, means he writes his own driver or abandons LabVIEW or the hardware in favor of a different product; both cases result in a dissatisfied customer. Now, I do write VI libraries too and develop "drivers" regularly. Some of them are openly available, some even free, and I would hope that those libraries/drivers would not fall under your category of poorly written "LabVIEW SDKs". They definitely almost never use sequences, and if they do, it is for data dependency only and nothing else. That there are people who still want to rewrite them may be true, but I would like to think that that has more to do with the "not invented here" syndrome than anything else, and I have to admit that I have gone down that path at times in the past too. Rolf Kalbermatter
  21. QUOTE (tengels @ May 14 2008, 07:52 AM) What is the serial interface? A USB-to-serial converter? If so, it may be a problem in its driver which VISA does not know how to deal with properly. I've seen strange behaviour with several USB-to-serial adapters in the past. Rolf Kalbermatter
  22. QUOTE (crelf @ May 14 2008, 01:41 PM) I've never seen the white one so far, and for the Call Library Node it wouldn't make sense anyhow. LabVIEW cannot determine whether an external shared library or particular functions in it are reentrant-safe. The programmer defines that in the Call Library configuration dialog (and if he says it is reentrant, the function in question had better be, or you are in for strange to rather nasty effects). Rolf Kalbermatter
  23. QUOTE (Michael_Aivaliotis @ May 15 2008, 03:12 AM) What is best and what is not is very debatable. LVOOP is most probably not such a bad thing, but such a driver restricts its use to applications that can and will use LVOOP. Also, just as with normal VI libraries, the usefulness and ease of use depend greatly on the person implementing that class. You can make a mess with (LV)OOP just as easily as with normal VI library interfaces, and in fact even more easily, since you need to understand OOP fairly well to really deliver easily reusable class implementations. I'm sure this is biased by experiences with some C++ code, which can sometimes be called horrible to understand at best, but it's nevertheless a real experience, and it is also a result of my mind, which likes visual representation very much but has much more affinity with a functional interface than with some of the more complex OOP design patterns. Rolf Kalbermatter
  24. QUOTE (BrokenArrow @ May 13 2008, 01:41 PM) Not sure about shared variables, but TCP can be made fast in LabVIEW. And you do not even need to go down to the raw socket level. Just get a small VI from the NI site to disable the Nagle algorithm for a TCP network refnum and you are done, without the delays for small data packets that make command-acknowledge type protocols slow. As to being compiled, as far as LabVIEW is concerned there should be little difference between development system and runtime system performance. If there is a big improvement, the application builder would have to do something at the SV engine level, which would be very spooky at best. Rolf Kalbermatter
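     Underneath, that small VI boils down to a single setsockopt() call on the raw socket behind the TCP refnum. A sketch, assuming the raw socket is extracted on the diagram (for instance with the vi.lib TCP Get Raw Net Object VI) and passed in:

        #include <winsock2.h>

        /* Disable Nagle's algorithm on an already connected TCP socket so that
           small packets of command/acknowledge style protocols are sent
           immediately instead of being coalesced. */
        __declspec(dllexport) int DisableNagle(SOCKET s)
        {
            int flag = 1;
            return setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                              (const char *)&flag, sizeof(flag));
        }

     A wrapper DLL is not even strictly needed; the same setsockopt() call can be made with a Call Library Node directly on ws2_32.dll.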
  25. QUOTE (Gary Rubin @ May 13 2008, 01:56 PM) I don't think you can draw the line that strictly. Very strictly speaking, the device driver is nowadays the piece of software that translates user application level requests into hardware-specific commands and address accesses. And that piece has to reside inside the kernel as a kernel-mode device driver, since that is the only way to directly access hardware in today's protected-mode OSes. However, talking to that kernel device driver directly is tedious at best, so it usually comes with a DLL that provides an easier-to-use API and can be considered part of the driver as well. But with that, I do not see any reason to exclude the LabVIEW VIs that access that API from being part of the driver either. After all, they translate the not-so-easy-to-use DLL calls into something that can be used much more easily in LabVIEW. And once you are there, why not qualify any collection of VIs that translates access to some form of hardware into something more LabVIEW-friendly as a driver too? I wouldn't go as far as calling VIs that access the normal OS API drivers, though, but that is an entirely arbitrary and subjective classification on my part. Rolf Kalbermatter