Everything posted by Rolf Kalbermatter

  1. QUOTE (dannyt @ Aug 21 2008, 05:01 AM) There is no trick to get more information with native functionality. But there is an iptools.llb library I have posted over at the dark side that calls into the Windows network management API and returns just about anything you could get from ipconfig too, which by the way calls the same API anyhow. Rolf Kalbermatter
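     A minimal C sketch of the kind of call such a library makes into the Windows network management API (an assumed, simplified equivalent using GetAdaptersInfo from the IP Helper API, not the actual iptools.llb code; link against iphlpapi.lib):

        /* Enumerate network adapters the same way ipconfig does. */
        #include <windows.h>
        #include <iphlpapi.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            ULONG size = 0;
            /* First call with a NULL buffer just reports the required size. */
            if (GetAdaptersInfo(NULL, &size) != ERROR_BUFFER_OVERFLOW)
                return 1;

            IP_ADAPTER_INFO *info = malloc(size);
            if (info && GetAdaptersInfo(info, &size) == NO_ERROR) {
                for (IP_ADAPTER_INFO *a = info; a != NULL; a = a->Next)
                    printf("%s\n  IP: %s  Mask: %s  Gateway: %s\n",
                           a->Description,
                           a->IpAddressList.IpAddress.String,
                           a->IpAddressList.IpMask.String,
                           a->GatewayList.IpAddress.String);
            }
            free(info);
            return 0;
        }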
  2. QUOTE (Val Brown @ Aug 14 2008, 03:27 PM) I have only played with the LabVIEW Embedded Version in LabVIEW 7.1, the predecessor to the Microprocessor SDK. There seem to have been quite a few changes incorporated since, but doesn't the Advanced Signal Processing Toolkit contain specific VIs that call out to some DLL? If that is the case, I do not see how you could port that DLL to your target CPU at all without the DLL source code, and I don't think you will be able to get that from NI. Rolf Kalbermatter
  3. QUOTE (osvaldo @ Aug 29 2008, 09:01 AM) It probably would run but would be quite a hog on the system. Its implementation from LabVIEW 5.0 days was not bad for the tools and techniques available back then, but it was definitely not written to be a concise application for resource-constrained systems. On the other hand, the PCs from those days often had fewer resources available than even the smaller real-time targets nowadays, so it could still work well. Rolf Kalbermatter
  4. QUOTE (jdunham @ Aug 9 2008, 06:22 PM) No! Unless they changed something in LabVIEW >= 8.2, casting or flattening will always give you 16 bytes of memory for every single EXT. The reason for this is that LabVIEW tries to maintain a consistent flattened memory stream format on all platforms, so it goes through the extra hassle of not only byte-swapping the data to maintain big-endian format on the flattened side but also extending the platform-specific EXT format to the biggest possible common denominator, which is the now obsolete 16-byte extended format of the 68K floating-point unit. Rolf Kalbermatter
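     A simplified C illustration of what keeping the flattened side big-endian means, shown for a DBL rather than the actual 16-byte EXT layout (a sketch of the byte-swapping step only, not LabVIEW's real flattening code; for EXT the platform format is additionally extended/padded to 16 bytes):

        #include <stdint.h>
        #include <string.h>

        /* Flatten a host-order double into a big-endian 8-byte stream. */
        static void flatten_dbl_big_endian(double value, uint8_t out[8])
        {
            uint8_t tmp[8];
            memcpy(tmp, &value, 8);           /* raw host-order bytes        */
        #if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
            for (int i = 0; i < 8; i++)       /* reverse to big-endian       */
                out[i] = tmp[7 - i];
        #else
            memcpy(out, tmp, 8);              /* already big-endian          */
        #endif
        }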
  5. QUOTE (Yair @ Aug 21 2008, 08:07 AM) The reason is that this time is actually generated and maintained by the good ol' ~55 ms (18.2 Hz) timer tick interrupt counter from ancient DOS times. On startup some memory is initialized with the time and date from the real-time clock, and this timer tick then continuously maintains the increment of this memory. At regular intervals (not sure, but usually once per 24 hours) the memory location is synchronized with the real-time clock and/or an external time server (Windows domain server or internet time server). Direct access to the real-time clock is minimized since it is accessed through IO space, and that is an extremely slow operation in comparison to today's computer components. In Windows 3.1 days there was a system.ini setting that allowed the system time to be updated with the multimedia timer instead, which could have a 1 ms resolution. But on heavily interrupt-loaded systems (network traffic, IO boards such as DAQ and video) this could lead to an overall degradation of the entire system to the point where normal application execution was almost impossible. Even on today's computers, and with the default timer tick, you usually see a significant drift of the system time in between synchronization points when heavy interrupts occur, such as with intense network traffic or a fast non-DMA DAQ operation. Rolf Kalbermatter QUOTE (giopper @ Aug 31 2008, 04:32 PM) My explanation is that the error in the determination of the parameters of the linear relationship, although very small, on the long run can produce a large time error (I got 50 ms over one hour.) Another explanation is that only the PC clock is drifting (after all, 50 ms/h = 1.2 s/day only) but at the moment I have no way to verify this. As said, the system time on Windows, which is used by LabVIEW for its absolute time, is really only synchronized at regular intervals with the real-time clock in the PC or an external time server. For the rest it is a purely interrupt-driven software timer tick that can and usually does pick up some drift from other interrupt sources in the system. But even the real-time clock, despite its name, is anything but real time. It is a timer driven by a local quartz-based oscillator whose quality is not too bad but definitely nothing to write home about either. A clock deviation of 100 ppm should already be considered rather good in that respect, and that would in the worst case already be up to about 10 seconds a day. In reality the deviation of that real-time clock is actually more in the range of a few seconds a day, but it definitely is not real time. You can however set up Windows to regularly synchronize with an internet time server over NTP and should then get more accurate timing. Still, the default resolution of ~16 ms will remain even then, although I thought this was at some point 10 ms for NT-based OSes. Rolf Kalbermatter
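     A quick way to see the tick increment Windows actually uses to advance this software-maintained system time is GetSystemTimeAdjustment (a small C sketch; on most systems it reports roughly 15.6 ms):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            DWORD adjustment = 0, increment = 0;
            BOOL  disabled   = FALSE;

            if (GetSystemTimeAdjustment(&adjustment, &increment, &disabled)) {
                /* increment is in 100 ns units; 156250 equals 15.625 ms */
                printf("clock tick increment: %.3f ms (periodic adjustment %s)\n",
                       increment / 10000.0, disabled ? "disabled" : "enabled");
            }
            return 0;
        }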
  6. QUOTE (JiMM @ Aug 25 2008, 08:31 AM) One-hit wonder and an internet domain as user name :thumbdown: Seems like spam with a probability bordering so close to certainty that I can't calculate the difference to real certainty with a double numeric in LabVIEW. Rolf Kalbermatter
  7. QUOTE (Jon Sjöstedt @ Sep 3 2008, 03:39 AM) Not really sure about Solaris, and Solaris in many ways is often a bit special. But on Linux it seems to be implemented by range-locking the entire file: fcntl(fd, F_SETLK, &lock) Rolf Kalbermatter
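     For reference, a minimal C sketch of such an advisory whole-file lock on Linux (the file name is just an example):

        #include <fcntl.h>
        #include <unistd.h>
        #include <stdio.h>

        int main(void)
        {
            int fd = open("data.bin", O_RDWR);
            if (fd < 0) return 1;

            struct flock lock = {0};
            lock.l_type   = F_WRLCK;    /* exclusive (write) lock          */
            lock.l_whence = SEEK_SET;
            lock.l_start  = 0;
            lock.l_len    = 0;          /* 0 = lock the entire file        */

            if (fcntl(fd, F_SETLK, &lock) == -1)
                perror("file already locked by another process");

            /* ... work with the file ... */

            lock.l_type = F_UNLCK;      /* release the lock                */
            fcntl(fd, F_SETLK, &lock);
            close(fd);
            return 0;
        }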
  8. QUOTE (shoneill @ Sep 5 2008, 08:44 AM) And who created the beginning of it all? It's by definition an unanswerable question. Rolf Kalbermatter
  9. QUOTE (shoneill @ Sep 5 2008, 06:56 AM) I think that is what his book is about: how to create God. So those 2% are not created by God but create it/him/her. An idea I would not completely dismiss, but the way he presents it is not something I like very much. Rolf Kalbermatter
  10. QUOTE (alfa @ Sep 3 2008, 03:11 AM) By saying that 98% of people are at animal level, do you want to hint in any way that you are not part of those 98%? By making yourself stand out from the rest you would really do only one thing: please and strengthen your ego. And that puts one further away from any form of enlightenment than any "intelligence on animal level". Rolf Kalbermatter
  11. QUOTE (ragglefrock @ Aug 28 2008, 05:38 PM) There are several reasons typecast could be slow with that. 1) Typecast uses big-endian byte ordering internally. You may say now: but these are both numbers and not a byte stream; however, for some very strange reason, the byte ordering for floating-point numbers in LabVIEW does not follow this big-endian scheme. So as far as I remember it will probably byte-shuffle the data too when doing a typecast between integer and floating point. I know it doesn't do the right byte shuffling between a byte stream and floating-point data. 2) Floating-point values are put in the FPU to operate on them. Maybe Typecast does something unnecessary there, since for a mere typecast involvement of the FPU certainly wouldn't be necessary. 3) 64-bit integers are a fairly recent addition to LabVIEW. If this was with LabVIEW 8.0 or maybe 8.2, it could be that the typecast operation was anything but optimal when 64-bit integers were involved. Rolf Kalbermatter
  12. QUOTE (ragglefrock @ Aug 26 2008, 01:07 PM) You can typecast from any data into any datatype as long as both are flat. And there shouldn't really be a huge overhead from typecast, since the memory usually stays the same. It's more a matter of the wire type (color) changing than anything else, and that is an edit-time operation, not a runtime one. For instance, typecasting a uInt16 enum into an int16 integer should not involve any data copying (unless the incoming wire is also used somewhere else in place, but that is simply a dataflow requirement, not something specific to typecast). If the memory size is not the same then yes, there will have to be some data copying. But typecasting a 1D array into a string should really not cause a memory copy. The same memory area can be used; only its compiler type is changed and the array length indicator adapted to indicate the size in bytes instead of in array elements. Rolf Kalbermatter
  13. QUOTE (normandinf @ Aug 22 2008, 11:32 PM) No! Conversion and typecasting are NOT the same. Conversion tries to maintain the numeric value. "Tries" because it can't always do that if you try to convert a number into a representation whose range is smaller than the current number. In that case the result is clamped (coerced) to the maximum/minimum possible value for that range. Typecast maintains the binary representation in memory. This means the numeric value will in most cases change significantly. In the case of typecasting enums into numerics and vice versa you also have to watch out that both sides use the same number of integer bits (a 16-bit unsigned/signed integer, for instance). LabVIEW's Typecast internally uses a big-endian stream representation. So typecasting an I32 into a U16 enum will normally give you the value corresponding to the first enum entry, since the uppermost 16 bits of the I32 are likely 0. Rolf Kalbermatter
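     In C terms the distinction between conversion and typecast looks roughly like this (an illustrative sketch, not LabVIEW code; the big-endian behaviour of Typecast is modelled by taking the uppermost 16 bits):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            int32_t big = 70000;        /* 0x00011170 */

            /* Conversion: keep the numeric value, clamp to the target range. */
            int16_t converted = (big > INT16_MAX) ? INT16_MAX :
                                (big < INT16_MIN) ? INT16_MIN : (int16_t)big;

            /* Typecast: keep the bits.  Because the flattened data is treated
               as big-endian, an I32 typecast to a U16 yields the UPPER 16 bits. */
            uint16_t typecast_be = (uint16_t)((uint32_t)big >> 16);

            printf("converted I16: %d, typecast U16: %u\n",
                   converted, (unsigned)typecast_be);   /* 32767 and 1 */
            return 0;
        }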
  14. QUOTE (Val Brown @ Aug 26 2008, 01:37 AM) Oh, and don't even attempt to run a timed loop. Its internal mechanism was, last time I checked, tightly coupled with drivers that go directly into the Windows kernel. Wine is application-level API translation software. They do not have, nor so far want to try to provide, a kernel-level translation layer. Rolf Kalbermatter
  15. QUOTE (Michael_Aivaliotis @ Aug 18 2008, 11:12 AM) I haven't tried it recently! But CrossOver is basically based on Wine (with some extra hacks to make it sometimes work better for standard applications like the unavoidable office suites from an unnamed company in Redmond and, in the case of the Mac, of course for unmatched apps like iTunes etc.). LabVIEW 5 and 6 already ran fairly well many years ago (around 2000 or so) on the Wine of that time. But their installers were also a lot lighter and less problematic than the super-duper multi-mega-monster installer of recent LabVIEW versions. And before LabVIEW 7 you could in the worst case just copy an entire LabVIEW tree over to the Wine system and run it from there without the need for an installation. On the other hand, CodeWeavers has done tremendous work on Wine to support the MSI installer technology, and it is currently in a state that allows a lot of applications to install with little or no problems. So I think you have a realistic chance of getting LabVIEW itself running on Wine and/or CrossOver. Wine versus CrossOver is likely to make no difference here since CodeWeavers, for obvious reasons, does not have LabVIEW on their radar, although I think installation of Wine on a Mac is still supposed to be quite a bit of a hassle, whereas CrossOver would seem to give you a smooth installation experience. Of course things like IO drivers are most probably not gonna work at all. This is likely even true for VISA, and possibly even for TCP/IP. NI-DAQ and just about any other NI-something would be a waste of time to even attempt. I stopped dabbling with LabVIEW on Wine after LabVIEW for Linux became available. It simply did not make much sense anymore to deal with the difficulties and some strange screen drawing artefacts when LabVIEW was run on Wine. Rolf Kalbermatter
  16. QUOTE (cmay @ Aug 7 2008, 07:10 PM) It's probably not possible at all. In order to run a Matlab DLL you have to have the Matlab runtime library, or whatever it is called, installed. And this library will likely rely on C runtime and Windows API calls that are not present in the RT system. In addition, a Matlab DLL would not even be compatible at all with the VxWorks-based real-time targets, so there is no chance of getting it to work there, I guess. That is unless Matlab can create full C code for its scripts and you can get that to compile in the target system's compiler tool chain. All in all not pretty work for sure, if it is possible at all. Rolf Kalbermatter
  17. QUOTE (MJE @ Aug 20 2008, 05:28 PM) Well, you have two options: 1) Write a small .NET assembly that does all the nitty-gritty enumeration work and returns a collection of data to LabVIEW. 2) Or use the LabVIEW XML library from the Internet Toolkit, a fairly full-featured interface to the Xerces open source library from the Apache project. 2.1) Maybe you could use the EasyXML Toolkit from JKI Software. OK, there is a third option that I would myself not consider an option: 3) Do everything in something other than LabVIEW. Rolf Kalbermatter
  18. QUOTE (Scott Carlson @ Aug 1 2008, 05:30 PM) Well, I didn't take it badly in any way! I actually welcome your comments on this. The two big reasons why I haven't done much with LabPython anymore are that I haven't really used it for quite some time, and that there is little feedback other than sometimes a single statement like "it doesn't work, please help"; when asked back what exactly doesn't work, and to please provide some simple example that can be used to reproduce it, there is usually little reaction. That, and of course the fact that there is lots and lots of other work to do, and some of that is either paid or for one reason or another closer to my heart than LabPython. Rolf Kalbermatter
  19. QUOTE (Scott Carlson @ Aug 1 2008, 12:55 PM) The guy responsible for LabPython is in fact yours truly. I think you are onto something when feeling that not deallocating the canvas object after execution is the culprit. And I'm not sure that assigning NULL to an object variable will actually properly clear that object, although I must add that my knowledge about Python itself is limited. As to automatically cleaning up after a script: no, there is no specific garbage collection done when the script finishes execution. I'm not even sure how to do that completely safely with Python hooks, as it would be hard to track down all resources that a script might have allocated itself, and there are certainly cases where this is not even desired. I do clean up the Python state when the script node is disposed, or, when using the VI interface to LabPython rather than the script node, when the Python session is closed. This should in my opinion clean up all Python-related resources that might have been allocated during that session. If you want to reuse a Python resource between script executions you should probably pass this resource as a uInt32 out of the script and feed it back in on subsequent executions. On the other hand, all script variables are stored inside the Python state by name, so as long as the state is not modified in that respect the object may still be allocated and valid on a subsequent execution. While it may seem desirable to deallocate all resources automatically at the end of each script execution, it was not my intention to emulate a Python command shell one to one. So state information will persist between script executions as long as the script is not unloaded, which as it is implemented now will only happen when the VI containing the script node leaves memory or, in the VI interface to LabPython, when the session is closed. Rolf Kalbermatter
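     The persistence described here can be sketched with the plain Python C API (an assumed, simplified model of a session, not LabPython's actual source):

        #include <Python.h>

        int main(void)
        {
            Py_Initialize();                          /* open the "session"      */

            PyRun_SimpleString("canvas = [0] * 100"); /* first script execution  */
            PyRun_SimpleString("print(len(canvas))"); /* second execution still  */
                                                      /* sees 'canvas'           */
            PyRun_SimpleString("canvas = None");      /* drops the name; the old */
                                                      /* object is freed by      */
                                                      /* reference counting      */
            Py_Finalize();                            /* closing the session     */
            return 0;                                 /* cleans up everything    */
        }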
  20. QUOTE (iowa @ Jul 21 2008, 06:06 PM) I doubt you will find it. Automating user interface testing at the Windows API level is absolutely not trivial (and hence contradicts your requirement of simple LabVIEW examples), which is the reason software packages like AutoIt exist. If you really hope to build an extensive UI testing framework for non-LabVIEW applications, don't go and try to use LabVIEW for it. Of course it can be done, as there is virtually nothing that couldn't be done in LabVIEW if it can be done in a program at all, but it is going to be a major pain in the ######. Feel free to research the Windows API on MSDN and prove me wrong, but asking for simple examples here is not likely to give you anything useful. Rolf Kalbermatter
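     To give an idea of what is involved at the Windows API level, a hedged C sketch that finds a window and clicks a button in it (the window title "Settings" and the button caption "OK" are made up; a real UI test framework needs far more than this):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            HWND dialog = FindWindowA(NULL, "Settings");     /* hypothetical target */
            if (!dialog) {
                printf("target window not found\n");
                return 1;
            }
            HWND button = FindWindowExA(dialog, NULL, "Button", "OK");
            if (button)
                SendMessageA(button, BM_CLICK, 0, 0);        /* simulate a click    */
            return 0;
        }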
  21. QUOTE (Gerod @ Aug 1 2008, 02:43 AM) To your question: not really! But you have quite a few things in that script that access components external to Python. I'm not going to set up an SQL server and a PDF formatting solution to test something like that. Can you reproduce it with a simple Python script too? The problem could be in the type conversion inside LabPython between LabVIEW and Python, but it could just as well be something in one of the other components involved, or a bad interference between one of these components and the fact that Python runs embedded inside LabVIEW. It could even be that Python or one of these components does not like something LabVIEW does to its executable environment; the way LabPython is done, it runs and executes Python and everything in there inside the LabVIEW process. In that case it could even be LabVIEW version dependent. Rolf Kalbermatter
  22. QUOTE (solarisin @ Jul 30 2008, 02:44 PM) You need to give a bit more information here. Where do you see an array here? VARIANT* is a pointer to an OLE VARIANT, which could contain an array in about 50 different type and format variants, but it could also be a timestamp, a numeric of any type either by value or by reference, a NULL, or quite a few other datatypes. The first word in that VARIANT record contains the code which tells you what the VARIANT really represents in terms of data. And the actual data, certainly in the case of arrays or strings, is not directly embedded in the variant structure, since that structure is fixed size and can only hold, I think, 8 data bytes directly. The rest is by reference, meaning it is a pointer, and after you receive such a VARIANT from somewhere you also need to make sure to release the resources that might be contained inside it, for instance by using the VariantClear() API from OLE. As to extracting data from a variant: while you could do that by hand, what is usually done is to verify that the vt type is actually what you expect and then use the corresponding OLE APIs (such as SafeArray...()) to extract that information from the variant. Rolf Kalbermatter
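     A C sketch of the usual pattern for a VARIANT that is expected to hold a 1D array of doubles (link against oleaut32.lib; the function name is just illustrative):

        #include <windows.h>
        #include <oleauto.h>
        #include <stdio.h>

        void print_double_array(VARIANT *v)
        {
            if (v->vt == (VT_ARRAY | VT_R8)) {        /* array of doubles?         */
                SAFEARRAY *sa = v->parray;
                double *data = NULL;
                if (SUCCEEDED(SafeArrayAccessData(sa, (void **)&data))) {
                    LONG lo = 0, hi = -1;
                    SafeArrayGetLBound(sa, 1, &lo);   /* dimension indices are     */
                    SafeArrayGetUBound(sa, 1, &hi);   /* 1-based                   */
                    for (LONG i = 0; i <= hi - lo; i++)
                        printf("%g\n", data[i]);
                    SafeArrayUnaccessData(sa);
                }
            }
            VariantClear(v);                          /* release owned resources   */
        }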
  23. QUOTE (jlokanis @ Jul 25 2008, 12:36 AM) It is slow since you typically have to execute several methods/properties to update it. After each method/property node it is redrawn ... unless you use the Defer Panel Updates property. Switching that on before a tree control update sequence and off again afterwards makes updating a tree control a lot faster. Rolf Kalbermatter
  24. QUOTE (JCFC @ Jul 17 2008, 08:14 PM) You can't with the Call Library Node directly! This is C++ with function pointer virtual tables and simply can't be implemented with the Call Library Node, unless you want to write lots and lots of nitty-gritty code on the LabVIEW diagram that takes care of things a C++ compiler does for you without you having to worry about anything. This is something where a wrapper DLL would be required, which wraps the access to the virtual table function pointer into a normal C function and exports that one. However, in this particular case there are several exported shell32 APIs that deal with PIDLs and are in fact already the kind of wrapper you want here. So researching MSDN to see what kind of functionality shell32 contains will certainly give you an exported API that does more or less exactly what you want. Rolf Kalbermatter
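     As an example of such already-wrapped shell APIs, a small C sketch that turns a display name into a PIDL and back into a filesystem path (assumes Windows XP or later; link against shell32.lib and ole32.lib):

        #include <windows.h>
        #include <shlobj.h>
        #include <stdio.h>

        int main(void)
        {
            CoInitialize(NULL);

            PIDLIST_ABSOLUTE pidl = NULL;
            WCHAR path[MAX_PATH];

            if (SUCCEEDED(SHParseDisplayName(L"C:\\Windows", NULL, &pidl, 0, NULL))) {
                if (SHGetPathFromIDListW(pidl, path))
                    wprintf(L"resolved: %ls\n", path);
                CoTaskMemFree(pidl);        /* PIDLs are allocated by the shell */
            }

            CoUninitialize();
            return 0;
        }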
  25. QUOTE (sisson @ Jul 15 2008, 01:46 PM) The SQL Toolkit uses ADO, the Windows ActiveX implementation. Its error codes are therefore really Windows error codes. Windows error codes for COM, the technology that ADO/ActiveX is built on, are all unsigned and normally hex formatted. Your error corresponds to the code 0x80030002, and looking in the Windows SDK for this error code shows STG_E_FILENOTFOUND from the subcategory FACILITY_STORAGE. So your guess that it has something to do with the UDL file does not seem so bad, although it is not conclusive. It could be any other file involved in ADO's handling of the database provider. Rolf Kalbermatter
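     Decoding such a COM error code by hand in C looks like this (purely illustrative):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            HRESULT hr = (HRESULT)0x80030002L;   /* STG_E_FILENOTFOUND */

            printf("severity: %s\n", FAILED(hr) ? "error" : "success");
            printf("facility: %d (FACILITY_STORAGE is %d)\n",
                   HRESULT_FACILITY(hr), FACILITY_STORAGE);
            printf("code:     %d\n", HRESULT_CODE(hr));
            return 0;
        }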