Everything posted by Rolf Kalbermatter

  1. I don't have any experience with them, but it doesn't surprise me that there are issues. Such managers have to hook deep into the Windows graphics subsystem. Since Microsoft never really planned for such an interface, many issues are to be expected: such a manager has to intercept various low-level Windows messages, some of which aren't documented anywhere, with differences between Windows versions on top of that. So they will all have issues somewhere, as they can almost never hook fully transparently into the Windows graphics subsystem. Even X Windows, which has a quite clear client-server architecture and therefore a well-documented interface between the two sides, has such issues depending on the X server and window manager version and implementation, since even there the servers and clients don't always implement every feature fully correctly. Imagine how much harder it must be to create a properly working window manager on a system where most of this interaction is in fact undocumented and officially unexposed.
  2. The first thing you should do is change the library path in the Call Library Node to just USER32.DLL. You currently have a path in there, not just a name, and in that case the LabVIEW Application Builder believes the DLL is private to the app and automatically adds it to the data folder of your build. LabVIEW then tries to load that private DLL and call its function, but the handle you pass in comes from USER32.DLL in the system folder and has absolutely no meaning inside the private copy of USER32.DLL, hence the crash. In fact the problem already occurs when LabVIEW tries to load its local copy of USER32.DLL. This DLL interacts on many levels with other Windows kernel internals and does all kinds of things to initialize the Windows API system. That conflicts with the initialization the system DLL performed when it was loaded at system startup, and Windows simply shuts the process down. After you rebuild your app, make sure you no longer end up with a user32.dll file inside your built application directory. This should fix the crash in the built app. Another change you should make is to turn the HWND control into a U64 control, remove the U32 conversion in Get HWnd From VI Ref.vi, and change the corresponding parameter in the Call Library Node to a pointer-sized integer. Otherwise you will run into nasty problems again if you ever move your application to LabVIEW for Windows 64-bit.
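     To see why the pointer-sized configuration matters: HWND is a pointer-sized type in the Win32 API, 4 bytes on 32-bit Windows and 8 bytes on 64-bit Windows, so forcing it through a U32 truncates the handle on the 64-bit platform. A minimal C sketch (the SetWindowTextA call is just a placeholder for whatever USER32 function you actually invoke):

         #include <windows.h>
         #include <assert.h>

         /* HWND is a pointer-sized handle; squeezing it into a 32-bit
            integer truncates it on 64-bit Windows and the USER32 call
            then receives a meaningless value. */
         void UseWindowHandle(HWND hwnd)
         {
             assert(sizeof(HWND) == sizeof(void *));
             SetWindowTextA(hwnd, "My VI Window"); /* placeholder call */
         }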
  3. In other words, to add to what Logman said: if you have a bag that is too small to hold 6 apples in their own little baggies, then trying to put in one apple at a time without a baggie still won't let you fit all 6 apples!
  4. Performance is likely not that much of a concern in this application, but I would definitely implement this as a single 64-bit integer (or maybe two 32-bit integers) and use boolean logic on it to compare and work with it. "old integer" XOR "new integer" != 0 will tell you if something has changed, and then you can detect which bits changed too. It's a little more math than trying to do it with a boolean array, but it works much faster and with less code.
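     A minimal C sketch of the idea (the bit values are made up, not from the original question):

         #include <stdint.h>
         #include <stdio.h>

         int main(void)
         {
             uint64_t old_bits = 0x00000000000000F0ULL;  /* previous state */
             uint64_t new_bits = 0x00000000000000B4ULL;  /* current state  */

             uint64_t changed = old_bits ^ new_bits;     /* nonzero iff anything changed */
             if (changed != 0) {
                 for (int i = 0; i < 64; i++)
                     if (changed & (1ULL << i))
                         printf("bit %d toggled\n", i);  /* bits 2 and 6 here */
             }
             return 0;
         }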
  5. This is all nice and good as long as you can assume that you deal with ANSI strings only (possibly with an extended character set, which is however codepage dependent and therefore absolutely not transparently portable from one computer to the next). And it is not even fully right in LabVIEW now, since LabVIEW really uses multibyte (MBCS) encoding. So autoindexing over a string has a serious problem: some would expect it to return bytes, I would expect it to return characters, which is absolutely not the same thing in MBCS and UTF encodings. The only way to represent an MBCS or UTF character as a single numeric on any platform would ultimately be UTF-32 encoding, which requires 32-bit characters, but not all platforms on which LabVIEW runs support that out of the box, and adding iconv or ICU support to a realtime platform has far-reaching consequences in terms of extra dependencies and performance. Java internally uses Unicode exclusively, and yes, you have to iterate over a string by converting it to a character array or by indexing the character position explicitly. And there is a strict separation between byte stream formats and string formats: going from one to the other always requires an explicit conversion with an optional encoding specification (most conversions also allow a default encoding, which is usually UTF-8).
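     To illustrate the byte-versus-character distinction, a small C sketch counting both in a UTF-8 encoded string (UTF-8 continuation bytes always match the bit pattern 10xxxxxx):

         #include <stdio.h>
         #include <string.h>

         int main(void)
         {
             const char *s = "Gr\xC3\xBC\xC3\x9F" "e";     /* "Grüße": 5 characters, 7 bytes */
             size_t bytes = strlen(s);
             size_t chars = 0;
             for (size_t i = 0; i < bytes; i++)
                 if (((unsigned char)s[i] & 0xC0) != 0x80) /* skip continuation bytes */
                     chars++;
             printf("%zu bytes, %zu characters\n", bytes, chars);
             return 0;
         }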
  6. Currently you are quite right. However, once NI adds Unicode support (if and when they do), you will run into problems if you just assume that string == byte array. So better get used to the idea that they might not be the same. There is in fact an INI key that adds preliminary Unicode support to LabVIEW, but it still causes more trouble than it solves, for many reasons, among them the following: the traditional "string == byte array" principle has produced a lot of legacy code that is basically impossible not to break when adding Unicode support. There was once a discussion where AQ proposed a radical change of string handling in LabVIEW to allow proper support of Unicode. All byte stream nodes such as VISA Read and Write and TCP Read and Write etc. would change to accept byte arrays as input. And there would probably be a new string type that could represent multibyte and wide-char strings, while the current string type would slowly get deprecated. The difficulty here is that the various LabVIEW platforms support different types of wide chars (UTF-16 on Windows, UTF-32 on Unix, and at most UTF-8 on most realtime systems). Handling those differences in a platform-independent manner is a big nightmare. Suddenly string length can mean either byte length, which differs between platforms, or character length, which is quite time consuming to calculate for longer strings. Most likely, when flattening/converting strings to a byte stream format they would have to be translated to UTF-8, which is the lowest common denominator for all LabVIEW platforms (and the standard format for the web nowadays). All in all a very large and complicated undertaking, but one NI has certainly been working on in the background for some years already. Why they haven't started to change the byte stream nodes to at least also accept byte arrays, or maybe better, to take byte arrays only, I'm not sure.
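     As a taste of what those platform-specific conversions look like, a hedged C sketch for the Windows case only, where wide strings are UTF-16 and the byte stream side would presumably be UTF-8:

         #include <windows.h>
         #include <stdio.h>

         int main(void)
         {
             const wchar_t *wide = L"Gr\u00FC\u00DFe";  /* "Grüße" as UTF-16 */
             char utf8[64];

             /* Convert the platform wide-char string to UTF-8 for a byte stream. */
             int n = WideCharToMultiByte(CP_UTF8, 0, wide, -1,
                                         utf8, sizeof utf8, NULL, NULL);
             if (n > 0)
                 printf("UTF-8 length including NUL: %d bytes\n", n);  /* 8 */
             return 0;
         }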
  7. Well, that is of course the easiest solution, but maybe not the quickest. Alternatively you could make use of the unicode.llb library in this post.
  8. Hmmmm, that TCHAR in there!! Do you compile your test application as Unicode or ASCII?
  9. That is C++ name mangling. The developer of the DLL forgot to tell the compiler/linker not to mangle the exported function names. It looks ugly and can be inconvenient, but it can't be the reason for your runtime error. Looking at the function prototype and the CLN configuration, I have to say there is no obvious problem visible. I would rather suspect that the DLL needs another function to be called first before you are allowed to call this function. Or there is a bug in the DLL (well, I would consider a possible requirement to call another function before you can call Initialize() a bug in itself).
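     For reference, the usual way to export unmangled names from a C++ built DLL is an extern "C" declaration (guarded so the same header still compiles as plain C); the Initialize name here just mirrors the function from the question:

         /* mydll.h - exported functions keep their plain C names */
         #ifdef __cplusplus
         extern "C" {
         #endif

         __declspec(dllexport) int Initialize(void);

         #ifdef __cplusplus
         }
         #endif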
  10. No, you haven't dreamed it: Vision Builder makes use of a special NI runtime engine located in lvfrt.dll. This is a private build that supposedly works as a runtime engine but with most of the compiling and other full development system features still in place. However, I have no idea how one would create an installer that deploys this runtime version rather than the standard one. If your only concern is the variable names, I do have a completely rewritten version of the formula parser that allows for arbitrary variable names (well, they must start with an alphabetic character and can then contain any alphanumeric character and the underscore) and performs quite a bit better than the NI parser, as it does not do all kinds of complicated string reformatting on the input string. It also has a clean string-to-RPN parser, so that as long as the string doesn't change one can simply execute the RPN evaluator only, which speeds up the actual calculation of the formula even more. The code was posted long ago here.
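     The split between parsing and evaluating is what buys the speed: the expensive text handling runs once, and repeated evaluations only walk the token list. A minimal C sketch of the evaluator half, with a made-up token layout (the actual parser posted back then is LabVIEW code, not C):

         #include <stdio.h>

         typedef struct { char op; double val; } Token;  /* op == 0 marks a number */

         /* Walk a parsed RPN token stream; as long as the formula text is
            unchanged, only this cheap loop needs to run again. */
         double eval_rpn(const Token *t, int n)
         {
             double stack[64];
             int top = 0;
             for (int i = 0; i < n; i++) {
                 if (t[i].op == 0) { stack[top++] = t[i].val; continue; }
                 double b = stack[--top], a = stack[--top];
                 switch (t[i].op) {
                 case '+': stack[top++] = a + b; break;
                 case '-': stack[top++] = a - b; break;
                 case '*': stack[top++] = a * b; break;
                 case '/': stack[top++] = a / b; break;
                 }
             }
             return stack[0];
         }

         int main(void)
         {
             /* "(2 + 3) * 4" parsed once into RPN: 2 3 + 4 * */
             Token prog[] = { {0, 2}, {0, 3}, {'+', 0}, {0, 4}, {'*', 0} };
             printf("%g\n", eval_rpn(prog, 5));  /* prints 20 */
             return 0;
         }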
  11. Not ideal, but the only possible solution then: create a string constant, copy the script into it, then put the script in one case of a case structure and the string constant in the other case. Now you can call the VI with a boolean input control to either execute the script or retrieve its contents. This requires some discipline to update the string constant whenever the developer changes the script, but it is the most straightforward and simple solution if you do not want to go with an external scripting solution. Alternatively, if your required scripting formulas are not too complex, you might go with the script parser that comes as an example in LabVIEW. It is however limited to basic mathematical operations, doesn't support arrays, and interprets the formula each time you execute it.
  12. I don't think any of these methods will work in a built executable: 1) because it is not supported in the runtime engine, 2) because it makes use of the node in 1), and 3) only maybe, and only if you set the VI explicitly to not remove the diagram when building the executable. The text of the script is part of the diagram like anything else. LabVIEW in fact compiles it into code, after which the script text is no longer really necessary in a built application and therefore gets removed with the diagram. But even if you leave the diagram in, it may still not work, as a lot of VI scripting operations do not work in the runtime engine; they don't make too much sense there anyway, since there is usually no diagram to work on. Also, because the formula node is really compiled into the rest of the VI code, you can't change it in a built application anyway, as that would require recompilation, and the runtime engine has no compilation capabilities. What you probably want is some external scripting solution like LabPython or Lua for LabVIEW, as there you can have the script in the form of a string that you can load from disk or simply embed in your VI, and even let the user modify it to run the changed code.
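     As an illustration of the external-script approach, here is a sketch using the plain Lua C API directly (Lua for LabVIEW wraps the same engine): the script stays an ordinary string that can be loaded from disk or edited by the user and re-run at any time, with no recompilation involved:

         #include <stdio.h>
         #include <lua.h>
         #include <lauxlib.h>
         #include <lualib.h>

         int main(void)
         {
             /* The "formula" is plain data; load it from a file or a string
                control and let the user change it at runtime. */
             const char *script = "result = 2 * (3 + 4)";

             lua_State *L = luaL_newstate();
             luaL_openlibs(L);
             if (luaL_dostring(L, script) != LUA_OK) {
                 fprintf(stderr, "script error: %s\n", lua_tostring(L, -1));
             } else {
                 lua_getglobal(L, "result");
                 printf("result = %g\n", lua_tonumber(L, -1));  /* 14 */
             }
             lua_close(L);
             return 0;
         }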
  13. Well, I'm not sure it provides more options than the ActiveX interface. And there are quite a lot of Excel related hierarchies such as Microsoft.Office.Interop.Excel, Microsoft.Office.Tools.Excel, etc., but only the first one seems to include publicly accessible constructors for the various interesting objects. Personally I think trying to interface to Excel through .Net is mostly an exercise with little merit other than learning about .Net interfacing.
  14. Unless you use LabVIEW for Windows 64-bit, you can have hundreds of GB of memory and your application is still limited to a maximum of 2GB. This is inherent to the 32-bit architecture. Due to the various components loaded by LabVIEW, the limit of usable memory is usually more like 1.3GB. The 2GB is the maximum usable memory, and into that needs to fit your application, any modules it may use, and some temporary management information for LabVIEW and Windows. The Excel plugin being ActiveX doesn't really help, as ActiveX in itself is quite a heavyweight technology. But don't worry, .Net wouldn't be better.
  15. You are of course right about only needing two comparisons too. However, you switched the Native and AQ time controls. And interestingly, the "real" AQ comparison is ALWAYS faster on my machine than the other two, with your comparison usually being slightly slower than the native one. However, the overall variation between runs is generally higher than the difference between the three methods within one run.
  16. At least in the Beta, adding superSecretQuantumVersion to the LabVIEW.ini file seems to magically make that documentation available in the help file, though.
  17. Yep, we use million, milliard, billion, billiard, trillion, trilliard, and so on. Seemed very logical and universal to me until I went to the States!
  18. But you need to do the tests Aristos Queue mentioned in order to coerce. If they did your test first, they would still have to find out, with potentially two more comparisons, whether they need to coerce to the upper bound or the lower one, so your test would just degrade performance for the out-of-range case. Also an interesting challenge to think about: which limit would you have expected the NaN value to be coerced to? Upper or lower? Both are equally logical (or rather illogical).
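     The NaN case is nastier than it looks, because every ordered comparison with NaN is false; which bound NaN ends up at depends purely on how the coercion happens to be written, as this C sketch shows:

         #include <math.h>
         #include <stdio.h>

         int main(void)
         {
             double lo = 0.0, hi = 10.0, x = NAN;

             /* Plain comparisons are all false for NaN, so it slips through. */
             double c1 = (x < lo) ? lo : (x > hi) ? hi : x;  /* nan */

             /* C99 fmin/fmax return the non-NaN operand, so the nesting
                order silently picks the bound. */
             double c2 = fmin(fmax(x, lo), hi);  /* 0  -> lower bound */
             double c3 = fmax(fmin(x, hi), lo);  /* 10 -> upper bound */

             printf("%g %g %g\n", c1, c2, c3);
             return 0;
         }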
  19. According to this site, a Lakh is 100,000 and a Lac is 10 times more. Maybe there are different Lacs in different areas of India. I personally find it complicated enough to distinguish between an American billion and a European billion; I don't think I'm going to memorize the Indian large numbers that easily, especially if it turns out they are not the same all over India. For the rest, LogMAN more or less gave you the details about memory management. Since your report generation interfaces to the Excel engine through ActiveX, it really executes inside the LabVIEW process and has to share its memory with LabVIEW. As LogMAN showed, one worksheet with your 64 * 600,000 cells uses up 650 MB of RAM. Three worksheets will already require ~1.8GB of RAM just for the Excel workbook. That leaves nothing for LabVIEW itself on a 32-bit platform, and it is still very inefficient and problematic even on 64-bit LabVIEW.
  20. I'm afraid you tax the Excel engine too much. So you say you try to create an Excel workbook which has 6 worksheets with 64 columns, each with 6 million samples? Do the math: 6 * 64 * 6,000,000 = 2.3 billion samples, with each sample requiring on average more than 8 bytes. (Excel really needs quite a bit more than that, as it also has to store management and formatting information about the workbook, worksheets, columns and even cells.) It doesn't matter if you do it one sample, one column, or one worksheet at a time: the Excel engine will have to at least load references to the data each time you append new data. With your numbers it is clear that you are creating an Excel workbook that will never fit in any current computer system. That is aside from the fact that Excel in Office 2003 had a serious limitation that did not allow more than 64k rows and 256 columns. This was increased to 1,048,576 rows by 16,384 columns in Excel 2007 and has stayed there since. Here you can see the current limits for an Excel worksheet: http://office.microsoft.com/en-us/excel-help/excel-specifications-and-limits-HA103980614.aspx You might have to rethink your strategy for how you structure your data report. What you currently try to do is not practical at all in view of later data processing or even just reviewing your data. Even if you could push all this data into your Excel workbook, you would be unable to open it on almost anything but the most powerful 64-bit server machine.
  21. LabVIEW's TCP/IP functions allow you to specify a service name instead of a port number. This also works on the server side: when you start up a server (with Create Listener) and specify a service name instead of a port number, the function opens an arbitrary unused port and registers that port number together with the service name in the local NI service registry (a little network service running on the local computer). A client then does not have to know the port number of the service (which can change between invocations) but only its service name. I'm not sure if web services also make use of this feature, but it is clear that a meaningful service name is much easier to remember than an arbitrary port number.
  22. That is not a feature of passing a handle by reference or not, but of the handle itself. Since there is an intermediate pointer, the contents of the handle can be resized without invalidating the handle itself. Of course, you now have to be very careful about race conditions: you could keep a handle somewhere in your code and change it at any time you like, right at the point where LabVIEW itself decides to work on that handle. That is a complete no-go. The original question about passing a handle by value or by reference is similar to passing any other variable type by value or by reference: the handle itself can only be modified inside the function and passed back to the caller when it is passed by reference. Never mind that, because of the intermediate pointer inside the handle, you can always change the contents of the handle anyway; you cannot change the handle itself if it was passed by value. And while you can always modify a handle's contents even when it was passed by value, passing it by reference has some potential performance benefits. When you pass a handle by value, LabVIEW has to allocate an empty handle to pass into your function, which you then resize; that is at least one more memory allocation. If you pass the reference of the handle, and you don't need to pass any array data into the function anyway, LabVIEW can simply pass in a NULL handle and your code only allocates a handle when needed. In the first case you have two allocations (one for the handle pointer and one for the data area in the handle) and then a reallocation of the data pointer. With the handle configured to be passed by reference, you have only the two initial memory allocations and no reallocation at all.
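     A minimal C sketch of the difference, using the LabVIEW extcode.h memory manager routines (the function names and the 5-byte payload are just illustrative):

         #include "extcode.h"
         #include <string.h>

         /* By value: LabVIEW already allocated a (possibly empty) handle
            before the call, so filling it costs an extra reallocation. */
         MgErr FillByValue(LStrHandle h)
         {
             MgErr err = DSSetHandleSize(h, sizeof(int32) + 5);
             if (err) return err;
             LStrLen(*h) = 5;
             memcpy(LStrBuf(*h), "hello", 5);
             return noErr;
         }

         /* By reference: LabVIEW may pass in a NULL handle, and we only
            allocate when there is actually data to return. */
         MgErr FillByReference(LStrHandle *h)
         {
             if (*h == NULL)
                 *h = (LStrHandle)DSNewHandle(sizeof(int32) + 5);
             else if (DSSetHandleSize(*h, sizeof(int32) + 5) != noErr)
                 return mFullErr;
             if (*h == NULL)
                 return mFullErr;
             LStrLen(**h) = 5;
             memcpy(LStrBuf(**h), "hello", 5);
             return noErr;
         }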
  23. Think about it: how should the OS socket driver decide whom to deliver incoming data to if more than one process were allowed to bind to the same port? A related issue is that anyone could then bind to any port and eavesdrop on the communication of another application without needing a promiscuous capture driver, which requires administrator privileges to install and start; and if an attacker has those privileges, you have much bigger problems than someone listening in on network communication.
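     The OS enforces this exclusivity at bind time, as a quick POSIX sockets sketch shows (port 5000 is arbitrary):

         #include <stdio.h>
         #include <netinet/in.h>
         #include <sys/socket.h>
         #include <unistd.h>

         int main(void)
         {
             struct sockaddr_in addr = {0};
             addr.sin_family = AF_INET;
             addr.sin_addr.s_addr = htonl(INADDR_ANY);
             addr.sin_port = htons(5000);

             int a = socket(AF_INET, SOCK_STREAM, 0);
             int b = socket(AF_INET, SOCK_STREAM, 0);

             if (bind(a, (struct sockaddr *)&addr, sizeof addr) < 0)
                 perror("first bind");   /* succeeds if the port is free */
             if (bind(b, (struct sockaddr *)&addr, sizeof addr) < 0)
                 perror("second bind");  /* fails with EADDRINUSE */

             close(a);
             close(b);
             return 0;
         }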
  24. Without knowing how the data got stored into the database in the first place, it will be very hard to come up with a good idea. Was the path flattened and then stored as a string? You basically have to reverse that operation exactly to have any chance of getting a sensible result. Shaun's idea of using Index Array is definitely the first step here: your query returns an array of values, and you want to extract the single value from the query result before you do any other manipulation to get back to your path.
  25. Duplicate post here. CINs are really just special DLLs on Windows, and as long as NI doesn't remove the ability of existing LabVIEW platforms to load a CIN, it will keep working. Note however that a CIN is therefore platform specific, which means a VI containing such a CIN will not load without error on another platform. This includes any LabVIEW version on Linux, MacOSX, and even LabVIEW for Windows 64-bit. And no, it's not about the OS version at all, but about which OS platform the LabVIEW version is meant for. So LabVIEW for Windows 32-bit will load your CIN regardless of whether the Windows OS is 32-bit or 64-bit, but LabVIEW for Windows 64-bit won't. And even if you wanted to, there is no way to port the CIN to LabVIEW for Windows 64-bit or any other newer LabVIEW platform, such as just about every LabVIEW Realtime platform (with the exception of the LabVIEW Pharlap ETS based ones, which use the Win32 model), since NI has removed that ability from all new LabVIEW platforms that came out since about version 8.0. And since LabVIEW 2010, the tools to create CINs have been removed for good from all LabVIEW versions, although they keep loading CINs that were created with older LabVIEW versions for the same platform.