Everything posted by Rolf Kalbermatter
-
Floating number representation
Rolf Kalbermatter replied to Antoine Chalons's topic in LabVIEW General
QUOTE (jdunham @ Aug 9 2008, 06:22 PM) No! Unless they changed something in LabVIEW >= 8.2, casting or flattening will always give you 16 bytes of memory for every single EXT. The reason is that LabVIEW tries to maintain a consistent flattened memory stream format on all platforms, so it goes through the extra hassle of not only byte swapping the data to keep the flattened side in Big Endian format, but also of extending the platform specific EXT format to the biggest possible common denominator, which is the now obsolete 16 byte extended format of the 68K floating point unit. Rolf Kalbermatter -
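(As an aside, a minimal C illustration of why a fixed flattened size is needed at all: the in-memory size of an extended precision float already differs per platform and compiler, while the flattened EXT stays at 16 bytes. This is only a sketch, not LabVIEW's actual code.)

/* Illustration only: the native size of an extended precision float varies,
 * which is exactly why a portable stream format needs one fixed size. */
#include <stdio.h>

int main(void)
{
    /* On x86 gcc this is typically 12 or 16 bytes (an 80-bit value plus padding),
       on MSVC it is 8 bytes (long double == double). */
    printf("sizeof(long double) in memory: %lu bytes\n", (unsigned long)sizeof(long double));
    printf("LabVIEW flattened EXT size:    16 bytes (fixed, big endian)\n");
    return 0;
}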
absolute timestamps with ms accuracy
Rolf Kalbermatter replied to giopper's topic in Calling External Code
QUOTE (Yair @ Aug 21 2008, 08:07 AM) The reason is that this time is actually generated and maintained by the good ol' ~55 ms timer tick interrupt counter from ancient DOS times. On startup some memory is initialized with the time and date from the real time clock, and this timer tick then continuously increments that memory. At regular intervals (not sure, but usually once per 24 hours) the memory location is synchronized with the real time clock and/or an external time server (Windows domain server or internet time server). Direct access to the real time clock is minimized since it is accessed through IO space, and that is an extremely slow operation in comparison to today's computer components. In Windows 3.1 days there was a system.ini setting that allowed the system time to be updated from the multimedia timer instead, which could have a 1 ms resolution. But on systems under heavy interrupt load (network traffic, IO boards such as DAQ and video) this could degrade the entire system to the point where normal application execution was almost impossible. Even on today's computers and with the default timer tick you usually see a significant drift of the system time between synchronization points when heavy interrupts occur, such as with intense network traffic or a fast non-DMA DAQ operation. Rolf Kalbermatter QUOTE (giopper @ Aug 31 2008, 04:32 PM) My explanation is that the error in the determination of the parameters of the linear relationship, although very small, on the long run can produce a large time error (I got 50 ms over one hour.) Another explanation is that only the PC clock is drifting (after all, 50 ms/h = 1.2 s/day only) but at the moment I have no way to verify this. As said, the system time on Windows, which LabVIEW uses for its absolute time, is really only synchronized at regular intervals with the real time clock in the PC or an external time server. For the rest it is a purely interrupt driven software timer tick that can and usually does pick up some drift from other interrupt sources in the system. But even the real time clock, despite its name, is anything but real time. It is a timer driven by a local quartz oscillator whose quality is not too bad but definitely nothing to write home about either. 100 ppm clock deviation should already be considered rather good in that respect, and that would in the worst case already be up to 10 seconds a day. In reality the deviation of that real time clock is more in the range of a few seconds a day, but it definitely is not real time. You can however set up Windows to regularly synchronize with an internet time server, including NTP, and should then get more accurate timing. Still, the default resolution of ~16 ms will remain even then, although I thought this was at some point 10 ms for NT based OSes. Rolf Kalbermatter -
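(For the curious, a small sketch in plain Windows C of how you can observe that system time quantum yourself: poll GetSystemTimeAsFileTime() and measure the step between two consecutive distinct readings. On most desktop systems this prints something around 15 to 16 ms.)

/* Estimate the update quantum of the Windows system time. */
#include <windows.h>
#include <stdio.h>

static ULONGLONG now100ns(void)
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);  /* 100 ns units since 1601, tick resolution */
    return ((ULONGLONG)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
}

int main(void)
{
    ULONGLONG t0, t1, t2;
    t0 = now100ns();
    while ((t1 = now100ns()) == t0)   /* wait for the next tick */
        ;
    while ((t2 = now100ns()) == t1)   /* and the one after that */
        ;
    printf("system time quantum: about %.3f ms\n", (t2 - t1) / 10000.0);
    return 0;
}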
Is really search engine optimization needed?
Rolf Kalbermatter replied to seostep.net's topic in LAVA Lounge
QUOTE (JiMM @ Aug 25 2008, 08:31 AM) One hit wonder and an internet domain as user name :thumbdown: Seems like spam, with a probability bordering so close to certainty that I can't calculate the difference to real certainty with a double numeric in LabVIEW. Rolf Kalbermatter -
How is deny mode implemented in UNIX?
Rolf Kalbermatter replied to Jon Sjöstedt's topic in Calling External Code
QUOTE (Jon Sjöstedt @ Sep 3 2008, 03:39 AM) Not really sure about Solaris, and Solaris in many ways is often a bit special. But on Linux it seems to be implemented by range locking the entire file: fcntl(fd, F_SETLK, &lock) Rolf Kalbermatter -
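(A minimal sketch of what such a whole-file lock looks like in C, assuming POSIX advisory locking; this illustrates the mechanism, not the code LabVIEW actually uses. The file name is just a placeholder.)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct flock lock = {0};
    lock.l_type   = F_WRLCK;   /* exclusive (write) lock for cooperating processes */
    lock.l_whence = SEEK_SET;
    lock.l_start  = 0;
    lock.l_len    = 0;         /* 0 means: lock the whole file */

    if (fcntl(fd, F_SETLK, &lock) < 0)
        perror("fcntl(F_SETLK)");   /* someone else holds a conflicting lock */

    /* ... work with the file ... */
    close(fd);                      /* closing releases the lock */
    return 0;
}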
QUOTE (shoneill @ Sep 5 2008, 08:44 AM) And who created the beginning of it all? It's by definition an unanswerable question. Rolf Kalbermatter
-
QUOTE (shoneill @ Sep 5 2008, 06:56 AM) I think that goes beyond his book: how to create God. So those 2% are not created by God but do create it/him/her. An idea I would not completely dismiss, but in the way he presents it, it is not something I like very much. Rolf Kalbermatter
-
QUOTE (alfa @ Sep 3 2008, 03:11 AM) By saying that 98% of people are at animal level, do you want to hint in any way that you are not part of those 98%? By making yourself stand out from the rest you would really do only one thing: please and strengthen your ego. And that puts one further away from any form of enlightenment than any "intelligence on animal level". Rolf Kalbermatter
-
QUOTE (ragglefrock @ Aug 28 2008, 05:38 PM) There are several reasons Typecast could be slow with that.
1) Typecast uses Big Endian byte ordering internally. You may say now: but these are both numbers and not a byte stream; yet for some very strange reason, the byte ordering for floating point numbers in LabVIEW does not follow this Big Endian scheme. So as far as I remember it will probably byte shuffle the data too when doing a typecast between integer and floating point. I know it doesn't do the right byte shuffling between byte streams and floating point numbers.
2) Floating point values are put in the FPU to operate on them. Maybe Typecast does something unnecessary there, since for a mere typecast involvement of the FPU certainly wouldn't be necessary.
3) 64 bit integers are a fairly recent addition to LabVIEW. If this was with LabVIEW 8.0 or maybe 8.2, it could be that the typecast operation was anything but optimal when 64 bit integers were involved.
Rolf Kalbermatter
-
QUOTE (ragglefrock @ Aug 26 2008, 01:07 PM) You can typecast from any data into any datatype as long as both are flat. And there shouldn't really be a huge overhead from Typecast, since the memory usually stays the same. It's more a matter of the wire type (color) changing than anything else, and that is an edit time operation, not a runtime one. For instance, typecasting a uInt16 enum into an int16 integer should not involve any data copying (unless the incoming wire is also used inplace somewhere else, but that is simply a dataflow requirement, not something specific to Typecast). If the memory size is not the same then yes, there will have to be some data copying. But typecasting a 1D array into a string should really not cause a memory copy. The same memory area can be used; only its type descriptor is changed and the array length indicator is adapted to indicate the size in bytes instead of in array elements. Rolf Kalbermatter
-
QUOTE (normandinf @ Aug 22 2008, 11:32 PM) No! Conversion and Typecasting are NOT the same. Conversion tries to maintain the numeric value. "Tries" because it can't always do that if you convert a number into a representation whose range is smaller than the current value. In that case the result is clamped (coerced) to the maximum/minimum possible value for that range. Typecast maintains the binary representation in memory. This means the numeric value will in most cases change significantly. In the case of typecasting enums into numerics and vice versa you also have to watch out that both sides use the same number of integer bits (so a 16 bit unsigned/signed integer, for instance). LabVIEW's Typecast internally uses a Big Endian stream representation. So typecasting an I32 into a U16 enum will normally give you the value corresponding to the first enum entry, since the uppermost 16 bits of the I32 are likely 0. Rolf Kalbermatter
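(A small C analogy of the difference, in case it helps: conversion keeps the value and clamps it to the target range, typecast reinterprets the raw bytes. The big endian aspect of LabVIEW's Typecast is only hinted at in the comments; this is a sketch, not LabVIEW's implementation.)

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double d = 1000000.0;

    /* Conversion: keep the numeric value, clamp to the target range
       (like LabVIEW's To Word Integer with coercion). */
    int16_t converted = d > INT16_MAX ? INT16_MAX :
                        d < INT16_MIN ? INT16_MIN : (int16_t)d;

    /* Typecast: reinterpret the raw bytes of the same memory. LabVIEW would
       present the 8 flattened bytes in big endian order and hand you the first two;
       here we just take the first two bytes in native order to show the idea. */
    int16_t typecast;
    memcpy(&typecast, &d, sizeof typecast);

    printf("converted: %d, typecast (native byte order): %d\n", converted, typecast);
    return 0;
}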
-
CodeWeavers Crossover with LabVIEW
Rolf Kalbermatter replied to Michael Aivaliotis's topic in LAVA Lounge
QUOTE (Val Brown @ Aug 26 2008, 01:37 AM) Oh, and don't even attempt to run a timed loop. Its internal mechanism was, last time I checked, tightly coupled to drivers that go directly into the Windows kernel. Wine is an application level API translation layer; so far they neither have nor want to provide a kernel level translation layer. Rolf Kalbermatter -
CodeWeavers Crossover with LabVIEW
Rolf Kalbermatter replied to Michael Aivaliotis's topic in LAVA Lounge
QUOTE (Michael_Aivaliotis @ Aug 18 2008, 11:12 AM) I haven't tried it recently! But CrossOver is basically based on Wine (with some extra hacks to make it sometimes work better for standard applications like the unavoidable Office suites from an unnamed company in Redmond, and in the case of the Mac of course for unmatched apps like iTunes etc.). LabVIEW 5 and 6 already ran fairly well many years ago (around 2000 or so) on the Wine of that time. But their installer was also a lot lighter and less problematic than the super duper multi mega monster installer of recent LabVIEW versions. And before LabVIEW 7 you could, in the worst case, just copy an entire LabVIEW tree over to the Wine system and run it from there without the need for an installation. On the other hand, CodeWeavers has done tremendous work on Wine to support the MSI installer technology, and it is currently in a state that allows a lot of applications to install with little or no problems. So I think you have a realistic chance of getting LabVIEW itself running on Wine and/or CrossOver. Wine versus CrossOver likely makes no difference here, since CodeWeavers for obvious reasons does not have LabVIEW on their radar, although I think installation of Wine on a Mac is still supposed to be quite a bit of a hassle whereas CrossOver would seem to give you a smooth installation experience. Of course things like IO drivers are most probably not going to work at all. This is likely even true for VISA, and possibly even for TCP/IP. NI-DAQ and just about any other NI-something would be a waste of time to even attempt. I stopped dabbling with LabVIEW on Wine after LabVIEW for Linux became available. It simply didn't make much sense anymore to deal with the difficulties and some strange screen drawing artefacts when LabVIEW was run on Wine. Rolf Kalbermatter -
QUOTE (cmay @ Aug 7 2008, 07:10 PM) It's probably not possible at all. In order to run a Matlab DLL you have to have the Matlab runtime library, or whatever it is called, installed. And this library will likely rely on C runtime and Windows API calls that are not present in the RT system. In addition, a Matlab DLL would not be compatible at all with the VxWorks based real-time targets, so there is no chance to get it to work there I guess. That is, unless Matlab can create full C code for its scripts and you can get that to compile in the target system's compiler tool chain. All in all not pretty work for sure, if it is possible at all. Rolf Kalbermatter
-
QUOTE (MJE @ Aug 20 2008, 05:28 PM) Well, you have two options:
1) Write a small .Net assembly that does all the nitty gritty enumeration work and returns a collection of data to LabVIEW.
2) Use the LabVIEW XML library from the Internet Toolkit, a fairly full featured interface to the Xerces open source library from the Apache project.
2.1) Maybe you could use the EasyXML Toolkit from JKI software.
OK, there is a third option that I would myself not consider an option:
3) Do everything in something else than LabVIEW.
Rolf Kalbermatter
-
QUOTE (Scott Carlson @ Aug 1 2008, 05:30 PM) Well, I didn't take it badly in any way! I actually welcome your comments on this. The two big reasons why I haven't done much with LabPython anymore are that I haven't really used it myself for quite some time, and that there is little feedback other than sometimes a single statement like "it doesn't work, please help"; when asked back what exactly doesn't work, and to please provide a simple example that can be used to reproduce it, there is usually little reaction. That, and of course the fact that there is lots and lots of other work to do, and some of that is either paid or for one or the other reason closer to my heart than LabPython. Rolf Kalbermatter
-
QUOTE (Scott Carlson @ Aug 1 2008, 12:55 PM) The guy responsible for LabPython is in fact yours truly. I think you are onto something when feeling that not deallocating the canvas object after execution is the culprit. And I'm not sure that assigning NULL to an object variable will actually properly clear that object, although I must add that my knowledge about Python itself is limited. As to automatically cleaning up after a script: no, there is no specific garbage collection done when the script finishes execution. I'm not even sure how to do that completely safely with Python hooks, as it would be hard to track down all the resources that a script might have allocated itself, and there are certainly cases where this is not even desired. I do clean up the Python state when the script node is disposed, or, when using the VI interface to LabPython instead of the script node, when the Python session is closed. This should in my opinion clean up all Python related resources that might have been allocated during that session. If you want to reuse a Python resource between script executions, you should probably pass this resource as a uInt32 out of the script and feed it back in on a subsequent execution. On the other hand, all script variables are stored inside the Python state by name, so as long as the state is not modified in that respect, the object may still be allocated and valid on a subsequent execution. While it may seem desirable to deallocate all resources automatically at the end of each script execution, it was not my intention to emulate a Python command shell one-to-one. So state information will persist between script executions as long as the script is not unloaded, which as it is implemented now will only happen when the VI containing the script node leaves memory or, in the VI interface to LabPython, when the session is closed. Rolf Kalbermatter
-
QUOTE (iowa @ Jul 21 2008, 06:06 PM) I doubt you'll find it. Automating user interface testing at the Windows API level is absolutely not trivial (hence it contradicts your requirement of simple LabVIEW examples) and is the reason software packages like AutoIt exist. If you really hope to build an extensive UI testing framework for non-LabVIEW applications, don't go and try to use LabVIEW for it. Of course it can be done, as there is virtually nothing that couldn't be done in LabVIEW if it can be done in a program at all, but it is going to be a major pain in the ######. Feel free to research the Windows API on MSDN and prove me wrong, but asking for simple examples here is not likely to give you anything useful. Rolf Kalbermatter
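(Just to give a flavour of the kind of Windows API code involved, here is a tiny C sketch: find a window by its caption and "click" a button child control. The window title and control class below are purely hypothetical; a real UI test framework needs focus handling, timing, SendInput() and much more.)

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical target window caption. */
    HWND top = FindWindowA(NULL, "Untitled - Notepad");
    if (!top) { printf("window not found\n"); return 1; }

    /* Find a child control by class name; class and caption depend entirely on the app. */
    HWND button = FindWindowExA(top, NULL, "Button", NULL);
    if (button)
        SendMessageA(button, BM_CLICK, 0, 0);   /* simulate a click on that button */

    /* Typing text usually means WM_SETTEXT/WM_CHAR or SendInput(), which is
       where the real pain starts. */
    return 0;
}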
-
QUOTE (Gerod @ Aug 1 2008, 02:43 AM) To your question: not really! But you have quite a few things in that script that access components external to Python. I'm not going to set up an SQL server and a PDF formatting solution to test something like that. Can you reproduce it with a simple Python script too? The problem could be in the type conversion inside LabPython between LabVIEW and Python, but it could just as well be something in one of the other components involved, or a bad interference between one of these components and the fact that Python runs embedded inside LabVIEW. It could even be that Python or one of these components does not like something LabVIEW does to its execution environment; the way LabPython is done, Python and everything in there runs inside the LabVIEW process. In that case it could even be LabVIEW version dependent. Rolf Kalbermatter
-
Retrieving VARIANT FAR* data through ActiveX
Rolf Kalbermatter replied to solarisin's topic in Calling External Code
QUOTE (solarisin @ Jul 30 2008, 02:44 PM) You need to give a bit more information here. Where do you see an array here? VARIANT* is a pointer to an OLE VARIANT, which could contain an array in about 50 different type and format variants, but it could also be a timestamp, a numeric of any type either by value or by reference, a NULL, or quite a few other datatypes. The first word in that VARIANT record contains the code which tells you what the VARIANT really represents in terms of data. And the actual data, certainly in the case of arrays or strings, is not directly embedded in the variant structure, since that structure is fixed size and can only hold, I think, 8 data bytes directly. The rest is by reference, meaning it is a pointer, and after you receive such a VARIANT from somewhere you also need to make sure to release the resources that might be contained inside it, for instance by using the according VariantClear() API from OLE. As to extracting data from a variant: while you could do that by hand, what is usually done is to verify that the vt type is actually what you expect and then use the according OLE APIs (such as the SafeArray...() functions) to extract that information from the variant. Rolf Kalbermatter -
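(A sketch of the usual C pattern, assuming a Windows/OLE automation environment and with error handling trimmed: check the vt tag, use the SafeArray APIs if it is an array, and always VariantClear() at the end.)

#include <windows.h>
#include <oleauto.h>
#include <stdio.h>

void consume_variant(VARIANT *v)
{
    if (v->vt == (VT_ARRAY | VT_R8))            /* e.g. a 1D array of doubles */
    {
        SAFEARRAY *sa = v->parray;
        double *data = NULL;
        LONG lo, hi, i;
        SafeArrayGetLBound(sa, 1, &lo);
        SafeArrayGetUBound(sa, 1, &hi);
        SafeArrayAccessData(sa, (void **)&data);
        for (i = 0; i <= hi - lo; i++)
            printf("element %ld = %g\n", (long)i, data[i]);
        SafeArrayUnaccessData(sa);
    }
    else if (v->vt == VT_BSTR)                  /* a string */
    {
        printf("string: %ls\n", v->bstrVal);
    }
    /* ... handle other vt codes as needed ... */

    VariantClear(v);   /* release whatever the variant references */
}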
QUOTE (jlokanis @ Jul 25 2008, 12:36 AM) It is slow since you typically have to execute several methods/properties to update it, and after each method/property node it is redrawn ... unless you use the Defer Panel Updates property. Switching that on before a tree control update sequence and off afterwards makes updating a tree control a lot faster. Rolf Kalbermatter
-
QUOTE (JCFC @ Jul 17 2008, 08:14 PM) You can't with the Call Library Node directly! This is C++ with function pointer virtual tables and it simply can't be implemented with the Call Library Node, unless you want to write lots and lots of nitty gritty code on the LabVIEW diagram that takes care of things a C++ compiler does for you without you having to worry about anything. This is something where a wrapper DLL would be required, one which wraps the access through the virtual table function pointer into a normal C function and exports that. However, in this particular case there are several exported Shell32 APIs that deal with PIDLs and are in fact already the kind of wrapper you want here. So researching MSDN to see what kind of functionality shell32 contains will certainly give you an exported API that does more or less exactly what you want to do here. Rolf Kalbermatter
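(As an illustration of such an already-wrapped shell32 export, and hedged because which function you actually need depends on what the original C++ interface call was doing: the snippet below gets the PIDL of a special folder and turns it back into a path through plain C exports, the kind of calls that map straightforwardly to a Call Library Node.)

#include <windows.h>
#include <objbase.h>
#include <shlobj.h>
#include <stdio.h>

int main(void)
{
    LPITEMIDLIST pidl = NULL;
    char path[MAX_PATH];

    CoInitialize(NULL);
    if (SHGetSpecialFolderLocation(NULL, CSIDL_DESKTOP, &pidl) == S_OK)
    {
        if (SHGetPathFromIDListA(pidl, path))
            printf("desktop folder: %s\n", path);
        CoTaskMemFree(pidl);   /* PIDLs from the shell are freed with the task allocator */
    }
    CoUninitialize();
    return 0;
}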
-
QUOTE (sisson @ Jul 15 2008, 01:46 PM) The SQL Toolkit uses ADO, the Windows ActiveX implementation. Its error codes are therefore really Windows error codes. Windows error codes for COM, the technology ADO/ActiveX is built on, are all unsigned and normally hex formatted. Your error corresponds to the code 0x80030002, and looking up this error code in the Windows SDK shows STG_E_FILENOTFOUND from the subcategory FACILITY_STORAGE. So your guess that it has something to do with the UDL file does not seem so bad, although it is not conclusive. It could be any other file involved in ADO's handling of the database provider. Rolf Kalbermatter
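(If you want to see the decomposition yourself, the standard HRESULT macros take that code apart: the severity bit set means error, facility 3 is FACILITY_STORAGE, and code 2 is the "file not found" part. A small C sketch:)

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HRESULT hr = (HRESULT)0x80030002;   /* STG_E_FILENOTFOUND */
    printf("failed:   %d\n", FAILED(hr) ? 1 : 0);
    printf("facility: %lu (FACILITY_STORAGE == %d)\n",
           (unsigned long)HRESULT_FACILITY(hr), FACILITY_STORAGE);
    printf("code:     %lu\n", (unsigned long)HRESULT_CODE(hr));
    return 0;
}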
-
QUOTE (MSpidey @ Jul 1 2008, 11:20 AM) Are you sure it is the MSP430 itself? I would expect problems with C runtime libraries that use string formatting to format the data sent over the serial line, but the low level interface hopefully doesn't care about such details. Rolf Kalbermatter
-
QUOTE (DrTrey @ Jun 30 2008, 04:41 PM) You indeed give too little information about the type of device and such. If it is some DAQ device or similar, however, the standard mode of operation is to write a device driver and the according user space DLL wrapper. That can easily be imported into LabVIEW through the Call Library Node in a way that is completely independent of the LabVIEW version. Another option is to use libusb to control your device. Interfacing LabVIEW to libusb through the Call Library Node is about the same in terms of version dependence, but the libusb API is not really meant to be interfaced by high level applications like LabVIEW or VB, and you will run into some issues that are usually most easily solved by writing a wrapper DLL that translates between the low level API and the LabVIEW Call Library Node. Last but not least, there is USB raw device support in NI-VISA. Not sure though from which version of LabVIEW this VISA feature is properly supported: while the feature is completely implemented in VISA, LabVIEW needs to support some extra API access modes to use it. A quick look in LabVIEW 6.0 shows that the necessary nodes to use USB through VISA are not present. LabVIEW 6.1 is the first to support this interface, but you will need a recent version of VISA installed on your computer; the first VISA versions had lots of difficulties supporting USB raw device access. Principally, option 1 will require you to really write C code. The other two options won't, but they can't really be considered a lot easier: you will simply implement the bit level protocol of your USB device in LabVIEW itself. Option two will still require some serious C knowledge to properly interface to libusb through the Call Library Node. From that perspective, using VISA to control your USB device would probably be the simplest solution, but don't expect it to be trivial. One disadvantage of the last two options is that you can't really leverage that solution for non-LabVIEW users such as Visual Basic, Visual C, etc. Rolf Kalbermatter
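(For reference, a hedged sketch of what the libusb route looks like in C, using the current libusb-1.0 API; the vendor/product IDs and the endpoint address are placeholders. This is the kind of code such a wrapper DLL would contain.)

#include <libusb-1.0/libusb.h>
#include <stdio.h>

int main(void)
{
    libusb_context *ctx = NULL;
    libusb_init(&ctx);

    /* Hypothetical VID/PID of the device in question. */
    libusb_device_handle *dev = libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
    if (dev)
    {
        unsigned char buf[64];
        int transferred = 0;
        libusb_claim_interface(dev, 0);
        /* Bulk read from endpoint 0x81 (IN), 1 second timeout. */
        if (libusb_bulk_transfer(dev, 0x81, buf, sizeof buf, &transferred, 1000) == 0)
            printf("read %d bytes\n", transferred);
        libusb_release_interface(dev, 0);
        libusb_close(dev);
    }
    libusb_exit(ctx);
    return 0;
}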
-
User Defined Tags in Variant Type Library
Rolf Kalbermatter replied to Tomi Maila's topic in LabVIEW General
QUOTE (Tomi Maila @ Nov 9 2007, 07:58 PM) User defined tags and user defined refnums are in fact quite like normal LabVIEW refnums. The Tag variant looks like a VISA resource and can have a text label that identifies the instance of the object. The refnum type looks similar to a file refnum. Those refnum types allow creating an object/method/event hierarchy using *.rc files in <LabVIEW>/resources/objmgr. The ultimate implementation of that object hierarchy has to reside in a DLL that exports certain functions that are defined in the according rc file. I believe they have existed since LabVIEW 7.0, and some of the necessary methods for that DLL to interface to LabVIEW exported C functions seem to have been accidentally? documented in the LabVIEW 7.1 and 8.0 extcode.h file. Still, trying to get something working here is a very tiresome exercise with lots of crashing and obviously involves external C programming. Rolf Kalbermatter