Everything posted by Rolf Kalbermatter

  1. At least in the Beta, adding superSecretQuantumVersion to the LabVIEW.ini file seems to magically make that documentation available in the help file though.
  2. Yep, we use million, milliard, billion, billiard, trillion, trilliard, and so on. Seemed very logical and universal to me until I went to the States!
  3. But you need to do the tests Aristos Queue mentioned in order to coerce. If they did your test first, they would still have to find out, with potentially two more comparisons, whether to coerce to the upper bound or the lower one, so your test would just degrade performance for the out-of-range case. Also an interesting challenge to think about: which limit would you have expected the NaN value to be coerced to? Upper or lower? Both are equally logical (or rather illogical).
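As a rough illustration of why NaN is awkward here, a naive coerce in C (this is not LabVIEW's implementation, just the comparison logic): every ordered comparison involving NaN is false, so a NaN input slips past both range tests and comes out unchanged instead of being coerced to either limit.

#include <math.h>

/* Naive coerce-to-range sketch (not LabVIEW's implementation). */
static double coerce(double x, double lo, double hi)
{
    if (x < lo) return lo;   /* false when x is NaN */
    if (x > hi) return hi;   /* false when x is NaN */
    return x;                /* NaN falls through to here */
}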
  4. According to this site a Lakh is 100,000 and a Lac is 10 times more. Maybe there are different Lacs in different areas of India. I personally find it complicated enough to distinguish between an American billion and a European billion; I don't think I'm going to memorize the Indian large numbers that easily, especially if it turns out that they are not the same all over India. For the rest, LogMAN more or less gave you the details about memory management. Since your report generation interfaces with the Excel engine through ActiveX, it really executes inside the LabVIEW process and has to share its memory with LabVIEW. As LogMAN showed, one worksheet with your 64 × 600,000 values uses up 650 MB of RAM. Three worksheets will already require ~1.8 GB of RAM just for the Excel workbook. That leaves nothing for LabVIEW itself on a 32-bit platform and is still very inefficient and problematic even in 64-bit LabVIEW.
  5. I'm afraid you tax the Excel engine too much. So you say you try to create an Excel workbook which has 6 worksheets with 64 columns, each with 6 million samples? Do the math: 6 * 64 * 6,000,000 = 2.3 billion samples, with each sample requiring on average more than 8 bytes. (Excel really needs quite a bit more than that, as it also has to store management and formatting information about the workbook, worksheets, columns, and even cells.) It doesn't matter if you do it one sample or one column or one worksheet at a time. The Excel engine will have to at least load references to the data each time you append new data to it. With your numbers it is clear that you are creating an Excel workbook that will never fit in any current computer system. That is aside from the fact that Excel in Office 2003 had a serious limitation that did not allow more than 64k rows and 256 columns. This was increased to 1,048,576 rows by 16,384 columns in Excel 2007 and has stayed there since. Here you can see the current limits for an Excel worksheet: http://office.microsoft.com/en-us/excel-help/excel-specifications-and-limits-HA103980614.aspx You might have to rethink your strategy for structuring your data report. What you are currently trying to do is not practical at all in view of later data processing or even just review of your data. Even if you could push all this data into your Excel workbook, you would be unable to open it on almost anything but the most powerful 64-bit server machine.
  6. LabVIEW's TCP/IP functions allow you to specify a service name instead of a port number. This also works for the server side. So when you start up a server (with Create Listener) and specify a service name instead of a port number, the function will open an arbitrary port that is not yet used and register this port number together with the service name in the local NI service name registry (a little network service running on the local computer). A client then does not have to know the port number of the service (which can change between invocations) but only its service name. Not sure if web services also make use of this feature, but it is clear that a meaningful service name is much easier to remember than an arbitrary port number.
  7. That is not a feature of passing a handle by reference or not, but of the handle itself. Since there is an intermediate pointer, the contents of the handle can be resized without invalidating the handle itself. Of course you now have to be very careful about race conditions, since you can in fact keep a handle somewhere in your code and change it at any time you like, right at the point where LabVIEW itself decides to work on the handle. This is a complete no-go. The original question about passing a handle by value or by reference is similar to passing any other variable type by value or by reference: the handle itself can only be modified inside the function and passed back to the caller when it is passed by reference. Never mind that, because of the intermediate pointer inside the handle, you can always change the contents of the handle anyway; you just cannot change the handle itself if it was passed by value. While you can always modify a handle even if it was passed by value, passing it by reference potentially has some performance benefits. When you pass a handle by value, LabVIEW has to allocate an empty handle to pass into your function, which you then resize, costing at least one more memory allocation. If you pass the handle by reference and do not want to pass any array data into the function anyway, LabVIEW can simply pass in a NULL handle and your code only allocates a handle when needed. In the first case you have two allocations (one for the handle pointer and one for the data area in the handle) and then a reallocation of the data area. When configuring the handle to be passed by reference you have only the two memory allocations for the handle your code creates, and no reallocation at all.
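As a hedged illustration of the two configurations, here is a C sketch using the usual declarations from extcode.h; the function names are made up and error handling is minimal.

#include <string.h>
#include "extcode.h"   /* cintools: MgErr, LStrHandle, NumericArrayResize, MoveBlock, ... */

/* Handle passed by value: LabVIEW always hands in a valid (possibly empty)
 * handle.  It can be resized in place, but the function cannot give the
 * caller a different handle. */
MgErr FillByValue(LStrHandle str)
{
    const char *msg = "hello";
    int32 len = (int32)strlen(msg);
    MgErr err = NumericArrayResize(uB, 1, (UHandle *)&str, len);
    if (!err)
    {
        MoveBlock(msg, LStrBuf(*str), len);
        LStrLen(*str) = len;
    }
    return err;
}

/* Handle passed by reference ("pointer to handle"): LabVIEW may hand in a
 * NULL handle for an empty array; NumericArrayResize allocates or resizes
 * as needed, so only the allocations that are actually required happen. */
MgErr FillByReference(LStrHandle *str)
{
    const char *msg = "hello";
    int32 len = (int32)strlen(msg);
    MgErr err = NumericArrayResize(uB, 1, (UHandle *)str, len);
    if (!err)
    {
        MoveBlock(msg, LStrBuf(**str), len);
        LStrLen(**str) = len;
    }
    return err;
}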
  8. Think about it. How should the OS socket driver decide whom to send incoming data to if more than one process were allowed to bind to the same port? Another related issue would be that anyone could then bind to any port and eavesdrop on the communication of another application without needing a promiscuous capture driver, which requires administrator privileges to install and start; if an attacker has those privileges, you have much bigger problems than someone listening in on a network connection.
  9. Without knowing how the data got stored into the database in the first place, it will be very hard to come up with a good idea. Was the path flattened and then stored as a string? You basically have to reverse that operation exactly to have any chance of getting a sensible result. Shaun's idea of using Index Array is, however, definitely the first step. Your query returns an array of values, and you want to get the single value out of the query result first, before you do any other manipulation to get back to your path.
  10. Duplicate post here. CINs are really just special DLLs on Windows, and as long as NI doesn't remove the ability to load a CIN from existing LabVIEW platforms, it will keep working. Note however that because of this, a CIN is platform specific, which means that a VI containing such a CIN will not load without error on another platform. This includes any LabVIEW version on Linux, MacOSX, and even LabVIEW for Windows 64 Bit. And no, it's not about the OS version at all, but about which OS platform that LabVIEW is built for. So LabVIEW for Windows 32 Bit will load your CIN irrespective of a 32-bit or 64-bit Windows OS, but LabVIEW for Windows 64 Bit won't. And even if you wanted to, there is no way to port the CIN to LabVIEW for 64-bit Windows or any other newer LabVIEW platform, such as just about every LabVIEW Realtime platform (with the exception of the LabVIEW Pharlap ETS based ones, which use the Win32 model), since NI has removed that ability from all new LabVIEW platforms that came out since about version 8.0. And since LabVIEW 2010, the tools to create CINs have been removed for good in all LabVIEW versions, although they keep loading CINs that were created with older LabVIEW versions for that same platform.
  11. Ubuntu can be made to work, but depending on your Linux hacker abilities it might be either a nice challenge or a nightmare. The biggest problem is the packaging, since LabVIEW comes as RPM packages while Ubuntu wants Debian packages. It can be worked around with various tools, but at least in older LabVIEW versions you then easily run into libc compatibility issues with the database manager interface that comes in the LabVIEW distribution to handle non-RPM systems. Not sure if that has changed in newer LabVIEW distributions. After that you can run into libc issues with LabVIEW itself and might have to add the correct symlink to the correct libc library to allow LabVIEW to run properly. Not to mention other library problems, with OpenGL Mesa for instance, depending on your Ubuntu version. Everything is usually solvable, either by installing the right version of the affected library, or sometimes just by symlinking the expected library to the actually installed one, and in one case that I saw by disabling a new kernel feature that messed up an older LabVIEW version. But without some deeper Linux hacking knowledge it is likely a painful exercise that might never lead to a successful installation.
  12. It's quite simple. Adapt To Type will pass the LabVIEW native datatype to the shared library. This means it will pass a cluster as its structure pointer to the shared library, for instance. The option you see to choose between Handles by Value vs. Pointers to Handles only applies to the case where the parameter itself is a LabVIEW handle (strings and arrays), nothing else! It has no influence on embedded handles inside a cluster, for instance, and it has no influence on any other LabVIEW datatype passed with the Adapt to Type configuration. With all that said, I think the meaning is simply understood. Basically it determines whether your LStrHandle (or whatever array type you have connected to the parameter) is passed as "LStrHandle string" or "LStrHandle *string". This has various implications. A "Handle param" parameter will always be passed into the function as a valid handle, even when it contains 0 elements. A "Handle *param" value can actually be a NULL handle for an empty handle, and your code had better be prepared to account for that by making sure the handle is properly resized or allocated as needed. The function NumericArrayResize() will properly handle both cases for you. Array Data Pointer will pass the pointer to the actual data portion inside the handle to the shared library. Note that this only really makes sense for datatypes where you also pass the actual length of the array to the shared library function, as LabVIEW will NOT append a zero terminating character to a string, for instance, so the library would otherwise have no idea how long the string or array really is. The last one, "Interface to Data", supposedly passes the interface pointer to the internal LabVIEW data object. This is analogous to a Variant but not the same. Supposedly you can then work on the data from within your C code through the interface as documented in ILVDataInterface.h. This is an object oriented interface based on the Windows COM specification but, as it seems, implemented in a LabVIEW platform independent way. Very powerful for automatically adapting C interfaces that could work fully polymorphically, if it weren't for the issue that you still can't have LabVIEW VIs that support such a template-like approach directly.
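As a sketch of how these array passing modes map to C prototypes (the function names below are made up for illustration; only the types come from extcode.h):

#include "extcode.h"   /* cintools: LStrHandle, int32, uInt8, ... */

/* "Handles by Value": always arrives as a valid handle, possibly with 0 elements. */
void TakeHandleByValue(LStrHandle str);

/* "Pointers to Handles": may arrive as a NULL handle for an empty string or array. */
void TakeHandleByReference(LStrHandle *str);

/* "Array Data Pointer": only the raw data pointer from inside the handle is
 * passed, so the length has to come in as a separate parameter because
 * LabVIEW does not append a terminating NUL character. */
int32 SumBytes(const uInt8 *data, int32 len)
{
    int32 i, sum = 0;
    for (i = 0; i < len; i++)
        sum += data[i];
    return sum;
}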
  13. I just had a quick look at the Wikipedia article for mDNS, and if they are right you have quite a problem to tackle. It seems that the IP frame needs to be built specifically to target a certain MAC address. If that is really the case, this would only work by directly opening a raw socket through whatever platform API your system has for sending such an IP frame. Extra trouble is that on all modern OSes, raw sockets are privileged system resources that can only be opened by processes that have the corresponding privileges (admin elevation on Windows, a specific raw socket privilege on Unix systems). Maybe the Wikipedia article is wrong about this and you can simply send a multicast UDP packet to the multicast address 224.0.0.251 on port 5353.
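If the simple multicast route turns out to be enough, a minimal POSIX sockets sketch of sending a datagram to that address could look like the following; the payload would still have to be a properly encoded DNS query, which is not shown here.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: send one UDP datagram to the mDNS multicast group 224.0.0.251:5353.
 * The payload is assumed to already be a well-formed DNS query message. */
int send_mdns_query(const void *payload, size_t len)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    unsigned char ttl = 255;   /* mDNS traffic is meant to stay on the local link */
    setsockopt(fd, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof ttl);

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5353);
    inet_pton(AF_INET, "224.0.0.251", &dst.sin_addr);

    ssize_t sent = sendto(fd, payload, len, 0, (struct sockaddr *)&dst, sizeof dst);
    close(fd);
    return sent < 0 ? -1 : 0;
}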
  14. C(++) code usually varies less over time but varies enormously between developers. Some prefer to make it look like an armadillo has been walking over the keyboard, while others spend more time getting the brackets and spaces perfect than writing the actual code. I personally tend to prefer neatly formatted C code, as it simply helps me understand the code more easily when looking at it a few weeks later. LabVIEW code certainly tends to change its style over time, partly because new features make it simply much easier to write something, partly because new insight and experience make you write different code to safeguard against all kinds of common programming errors that you have come across over time. But even here the variations between developers are usually a lot greater than between code I write now and code I wrote a few years ago. However, looking at code I wrote in LabVIEW 3.x certainly makes me wonder how I could ever have written it in such a way. I doubt that it was a recent change (>= LabVIEW 6 or 7). Any comparison with NaN is, according to IEEE, always false; even (NaN == NaN) should give false. LabVIEW has tried to follow the IEEE standard since its early days, but I do remember that they had some issues in very early versions of LabVIEW around LabVIEW 2.5/3.0. Now it could be that they broke this in some LabVIEW version, fixed it in the next, and your colleague ran into that. But it seems unlikely that they had not implemented the correct behavior before, unless you are talking about very old LabVIEW versions.
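A quick C illustration of that IEEE rule (nothing LabVIEW specific about it):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double nan = NAN;              /* quiet NaN from <math.h> (C99) */
    printf("%d\n", nan == nan);    /* prints 0: NaN compares unequal to everything, itself included */
    printf("%d\n", nan != nan);    /* prints 1 */
    printf("%d\n", isnan(nan));    /* prints 1: the portable way to test for NaN */
    return 0;
}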
  15. UTC time in ticks is very unspecific. A tick is simply an arbitrary unit with an arbitrary reference time. Traditionally a tick was the 55 ms interval timer tick used under DOS and 16-bit Windows, and even early versions of Windows NT used that timer tick. Newer Windows versions use a timer tick of 1 ms internally but still have functions that scale it to 55 ms. The reference time is usually the start of the machine. Obviously when you talk about UTC you most likely mean a more absolute value than the start time of the computer. Still, the reference time is rather arbitrary. While the .Net DateTime version uses midnight, January 1, 0001 (supposedly UTC, but the .Net DateTime.Ticks documentation is entirely unclear about this) with 100 ns resolution, LabVIEW uses midnight, Jan 1, 1904, GMT as reference with a 1 s resolution. Windows itself has several different formats, such as the C runtime time_t, which is typically referenced to midnight, Jan 1, 1970, UTC with a 1 s resolution (the same as what most Unixes, or more specifically the C runtime library on Unix, use). But Windows also has a FILETIME format which is referenced to midnight, Jan 1, 1601, UTC with a resolution of 100 ns. Now, LabVIEW's timestamp format does in fact support fractional seconds with a resolution of 1/2^32 s and internally retrieves its values from a FILETIME value, so if you convert the timestamp into a floating point value you still get about 1/2^20 s accuracy (roughly 1 us) for the foreseeable future. So if your reference time doesn't have to be specifically the .Net DateTime value, all you would likely need to do is simply convert the LabVIEW timestamp into a floating point value, and you can forget about any external DLLs.
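A small C sketch of the epoch arithmetic involved, assuming the LabVIEW timestamp has already been read out as a double number of seconds since the 1904 epoch; the offset constants are the commonly published values, not something read out of LabVIEW:

#include <stdint.h>

/* Seconds between the LabVIEW epoch (1904-01-01 UTC) and the Unix epoch (1970-01-01 UTC). */
#define LV_TO_UNIX_OFFSET_S        2082844800.0
/* 100 ns ticks between the .Net epoch (0001-01-01) and the Unix epoch (1970-01-01). */
#define UNIX_EPOCH_IN_DOTNET_TICKS 621355968000000000LL

double labview_to_unix_seconds(double lvSeconds)
{
    return lvSeconds - LV_TO_UNIX_OFFSET_S;
}

int64_t labview_to_dotnet_ticks(double lvSeconds)
{
    double unixSeconds = lvSeconds - LV_TO_UNIX_OFFSET_S;
    return (int64_t)(unixSeconds * 10000000.0) + UNIX_EPOCH_IN_DOTNET_TICKS;
}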
  16. As ned said, LabVIEW will not parse DLLs to scan them for secondary dependencies, and it should not even try to. If you have more than the main DLL anyhow, you should have an installer which takes care of putting all the DLLs in the right places. Many secondary DLL dependencies are not meant to be copied into an application-private location but should remain in the system location they were put in by their respective installer. Copying them to an application-private location can and often will cause more trouble than simply leaving them out, and you, as developer and intimate user of the DLL, are the only person aside from the DLL developer who can know which DLLs are needed, which of them are fine to copy, and which need to be installed properly.
  17. Why not? The .Net DateTime structure has a property Kind which can have the value DateTimeKind.Utc, DateTimeKind.Local, or DateTimeKind.Unspecified. LabVIEW timestamps are internally ALWAYS the number of seconds since midnight, Jan 1st, 1904, UTC. So LabVIEW properly translates the .Net DateTime structure into its own Timestamp format and takes care of doing the proper translation depending on the DateTimeKind value in that structure. If you want to display UTC in the LabVIEW control you have to change the DisplayFormat of that control accordingly, not change the internal time value of the Timestamp. I find it cleaner to change the property of the display element (or toString() method) than to maintain all kinds of extra flags with the timestamp itself, although that does have some implications when you move timestamps between timezones. On the other hand, also maintaining the relative timezone properties with each timestamp, while more flexible, requires a lot more data storage and all kinds of special case logic.
  18. I'm afraid I was seeing ghosts. I remember having seen some OpenSSL libraries on a cRIO in the past, but currently I can't find any on the one I have, nor in the RT Image files that are used to deploy components to an RT system. Found it! It's the nissl component, and the libraries are renamed nilibeay32 and nissleay32. And it seems to be 0.9.8i. Not really very new.
  19. With such comparisons you should watch out for the order of the comparands. The way you wrote it, I would definitely not agree with. I'm not a Linux fan, but that comparison is certainly not very fair: Linux had full featured OS support at a time when Windows was still mostly a crashy UI shell on top of an antique floppy disk manager.

ZLIB is not the problem. But lvZIP also incorporates the minizip extra code to support the ZIP format. ZLIB "only" implements the deflate and inflate algorithms used to compress/decompress the actual streams in a ZIP file; ZIP adds an archive management layer around that. With ZLIB alone you cannot create archives, only compressed files. This ZIP code was necessary anyhow; implementing the ZIP handling in LabVIEW is an exercise I would never even have started. So I had the choice of using ZLIB as a standard library, which was in a transition phase at that moment (changing calling conventions on Windows to make it more consistent), together with my own ZIP wrapper code taken from minizip (minizip is an executable, not a shared library), or putting it all into one shared library for ease of distribution. Since there was already a custom shared library component anyhow, the choice was easy.

Also, the problem of the 32-bit to 64-bit transition would have remained. The Call Library interface does not support seamless switching between those two environments if you have any structure parameter containing non-flat data. Even opaque pointers are somewhat problematic: you either use the pointer-sized integer and have to route it as a 64-bit integer throughout the diagram, losing any possibility of preventing a programmer from mis-wiring these "pointers" with just about any other numeric on a diagram, or you end up creating a C wrapper anyhow at that point. The trick with the LabVIEW datalog refnum doesn't work reliably here since those are only 32 bit on either platform.

Another issue I have been trying to work on is that minizip does absolutely nothing to deal with character code pages. For a DOS command line tool that is not so bad, since it inherits the OEM codepage for your country setting, and ZIP files are supposed to be OEM encoded. The same code called from LabVIEW or any other GUI app will use the ANSI codepage, which also depends on your country setting and usually contains mostly the same extra characters, but of course at totally different indices in the upper 128 codes. Doing this translation on Windows is a call to two WinAPIs to translate from ANSI to UTF-16 and then to OEM, and vice versa. Doing it on the Mac takes a few more, and completely different, APIs to translate via the widechar roundabout, and that only guarantees that the result is similar to Windows; doing it under any Linux version is simply a total pain in the a$$ since there are about 5 different libraries for character encoding translation, all of which come with their own list of secondary dependencies. Doing all that in the C wrapper code is bad enough, but doing it on the LabVIEW diagram is simply an exercise in vain.

vxWorks doesn't come with OpenSSL out of the box, but several NI tools such as the webserver will install an OpenSSL library (which NI presumably cross compiled for this purpose). The one problem with this is that you have no real control over which OpenSSL version gets installed, which can be a serious problem when you want to use certain features. Just about every OpenSSL client sooner or later tends to check the OpenSSL version to disable some functionality based on lack of feature support, or to do rather complicated workarounds for specific bugs in certain versions. A nice exercise to add to a LabVIEW wrapper too!
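For the Windows case, the two-call round trip from ANSI to UTF-16 to OEM looks roughly like this simplified sketch (the caller owns and frees the returned buffer; error handling is mostly omitted):

#include <windows.h>
#include <stdlib.h>

/* Convert an ANSI (CP_ACP) string to the OEM codepage (CP_OEMCP) via UTF-16. */
char *AnsiToOem(const char *ansi)
{
    int wlen = MultiByteToWideChar(CP_ACP, 0, ansi, -1, NULL, 0);
    if (wlen <= 0)
        return NULL;
    wchar_t *wide = malloc(wlen * sizeof(wchar_t));
    if (!wide)
        return NULL;
    MultiByteToWideChar(CP_ACP, 0, ansi, -1, wide, wlen);

    int olen = WideCharToMultiByte(CP_OEMCP, 0, wide, -1, NULL, 0, NULL, NULL);
    char *oem = (olen > 0) ? malloc(olen) : NULL;
    if (oem)
        WideCharToMultiByte(CP_OEMCP, 0, wide, -1, oem, olen, NULL, NULL);
    free(wide);
    return oem;
}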
  20. You don't use CINs anymore, and consequently you don't need cintools either. The only things in that directory that are still needed are the extcode.h file (and its support headers, but you usually don't deal with those directly) and the labview.lib file to link your shared library against. CINs aren't even supported on newer LabVIEW platforms such as the 64-bit versions or Linux RT. A shared library/DLL called through the Call Library Node has been the way to go for several years already. If you do it right you end up with one VI library without any platform-specific code paths, one shared library per supported platform, and one C source tree for your shared library/wrapper.
  21. We agree to disagree here! I find maintaining platform discrepancies and low level pointer acrobatics at the LabVIEW diagram level simply a pain in the a$$, and in a few other places too at the same time. It's much easier to maintain these things at the C source level, which allows easy adaptation in a generic way, so that the LabVIEW part can concentrate on what it is best at, and the C part likewise. I know that lvZIP is not an ideal example, since its 64-bit support is still not released, but supporting 64 bit there would have been a complete nightmare if everything had been done at the LabVIEW level; now it is basically a matter of working out one kink in the cable to allow private refnums to work for 64-bit pointers too. There are two solutions for that: using an undocumented LabVIEW feature that has existed since at least LabVIEW 7.0, or cooking up something myself to translate between 64-bit pointers and LabVIEW 32-bit refnums. The real reasons that lvZIP still isn't 64 bit are, however, much more mundane, aside from time constraints. For one, I only recently got a computer with a 64-bit OS, and for another, a change at SourceForge regarding SVN read/write access, which is not natively supported by TortoiseSVN, has kept me from working on this for a long time. But an API like OpenSSL or FFMPEG, which makes use of complex parameters beyond flat clusters, is IMHO simply not maintainable in the long term without C wrappers.
  22. At the diagram level LabVIEW always uses 64-bit integers for pointer-sized values, since LabVIEW also mandates that the flattened format of every datatype is the same on every LabVIEW platform. If it adapted the pointer-sized integer to the platform, a cluster containing one or more pointers would vary in size depending on the platform you run it on. This has one bad implication: you cannot define a cluster in LabVIEW that contains pointers and pass it to a Call Library Node as a struct. Such a struct will always mismatch the natively expected datatype on either the 32-bit or the 64-bit system. The only solution there is to create both types and call the function with conditional compilation depending on the platform it is executing on. It is unfortunate, but the LabVIEW developers had the choice of maintaining the long standing (since LabVIEW 2.5) paradigm of guaranteeing flattened data format consistency across all LabVIEW platforms, or allowing easy interfacing to external libraries containing pointers in their struct parameters. Considering that the flattened data paradigm has existed for over 20 years, and pointers are not really a native feature of LabVIEW anyway, it is easy to see why they made the choice they did. While accessing the internals of AVFormatContext does indeed seem required, it is simply a total pain in the ass to do such things in a LabVIEW diagram, especially when you consider the magic you need when such structures contain pointers, which they most likely do, and you want to support 32 bit and 64 bit seamlessly. This is definitely the point where starting to write a LabVIEW-specific wrapper library in C(++) is simply the only useful way to proceed.
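A small C illustration of the mismatch (the struct name is made up):

#include <stdint.h>

/* What the native library expects: the pointer member is 4 bytes in a
 * 32-bit build and 8 bytes in a 64-bit build, so the struct's size and
 * layout differ per bitness. */
typedef struct {
    int32_t  id;
    char    *name;   /* 4 or 8 bytes depending on the library's bitness */
} NativeRecord;

/* A LabVIEW cluster, by contrast, flattens to the same layout on every
 * platform (the "pointer" is a fixed uInt64 on the diagram), so a single
 * cluster definition cannot match NativeRecord on both bitnesses; in
 * practice two cluster typedefs are kept and selected with a conditional
 * disable structure depending on the target bitness. */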
  23. You can't use fixed size pointer elements if you ever intend to allow your VIs to run in both LabVIEW 64 Bit and LabVIEW 32 Bit. A pointer is 32 bit when the library is compiled for 32 bit and 64 bit when the library is compiled for 64 bit. The library must be compiled for whatever bitness the calling process has, e.g. 32 bit for LabVIEW 32 Bit or 64 bit for LabVIEW 64 Bit, independent of the OS bitness you are running on. On the other hand, you never have 32-bit and 64-bit pointers intermixed in the same process environment: it is either one or the other, never both at the same time. Having to redefine the AVFormatContext struct in your own code is definitely a very bad sign. If it is not declared as a complete type in the public headers of FFMPEG, then it is not meant to be accessed from outside the library! No exceptions here! The reason is that such types typically change between different versions of the library, and accessing those structs directly means that your calling application is no longer able to deal with a different version of the library.
  24. Don't tell me you are thinking about reimplementing OpenGL on top of the 2D Picture Control!
  25. You are fully right about all of this, except that for opaque data pointers the caller doesn't, and shouldn't, have any idea about the size of the structure the pointer is pointing to. Instead such APIs always have a function that creates the structure and hands the data pointer back to the caller, and logically a corresponding function that takes that pointer and deallocates any resources it refers to, including the actual structure itself. For LabVIEW it is in all cases just a pointer-sized integer.
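In C that pattern typically looks like the following sketch (the names are hypothetical); on the LabVIEW side the returned pointer is only carried around as a pointer-sized integer and handed back into the library's own functions:

#include <stdlib.h>

/* The public header exposes only the incomplete type and the two functions. */
typedef struct Session Session;
Session *session_create(void);          /* allocates and returns the opaque pointer */
void     session_destroy(Session *s);   /* frees everything the pointer refers to */

/* Private to the library: the caller never needs to know this layout. */
struct Session {
    int            state;
    unsigned char *buffer;
};

Session *session_create(void)
{
    return calloc(1, sizeof(Session));
}

void session_destroy(Session *s)
{
    if (s) {
        free(s->buffer);
        free(s);
    }
}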