Everything posted by Rolf Kalbermatter
-
I'm afraid I was seeing ghosts. I remember having seen some OpenSSL libraries in the past on a cRIO, but currently I can't find any in the one I have, nor in the RT Image files that are used to deploy components to an RT system. Found it! It's the nissl component, and the libraries are renamed nilibeay32 and nissleay32. It seems to be 0.9.8i. Not really very new.
-
With such comparisons you should watch out for the order of the comparands. The way you wrote it I would definitely not agree with. I'm not a Linux fan, but that comparison is certainly not very fair: Linux had full-featured OS support at a time when Windows was still mostly a crashy UI shell on top of an antique floppy disk manager.

ZLIB is not the problem. But lvZIP also incorporates the minizip extra code to support the ZIP format. ZLIB "only" implements the deflate and inflate algorithms, which are used to compress/decompress the actual streams in a ZIP file; ZIP adds archive management around that. With ZLIB alone you cannot create archives, only compressed files. This ZIP code was necessary anyhow, and implementing it in LabVIEW is definitely an exercise I would never even have started. So I had the choice to use ZLIB as a standard library, which was in a transition phase at that moment (changing calling conventions on Windows to make it more consistent), and keep my own ZIP code wrapper taken from minizip (minizip is an executable, not a shared library), or put it all in one shared library for ease of distribution. Since there was already a custom shared library component anyhow, the choice was easy.

Also, the problem of the 32-bit and 64-bit transition would have remained. The Call Library Interface does not support a seamless change between those two environments if you have any structure parameter containing non-flat data. Even opaque pointers are somewhat problematic: you either end up using the pointer-sized integer and having to route it as a 64-bit integer throughout the diagram, losing any possibility to prevent a programmer from mis-wiring these "pointers" with just about any other numeric in a diagram, or you create a C wrapper anyhow at that point. The trick with the LabVIEW datalog refnum doesn't work reliably here, since those are only 32 bit on either platform.

Another issue I have been trying to work on is that minizip does absolutely nothing to deal with character code pages. As a DOS command line tool that is not so bad, since it inherits the OEM codepage for your country setting, and ZIP files are supposed to be OEM encoded. The same code called from LabVIEW or any other GUI app will use the ANSI codepage, which also depends on your country setting and usually contains mostly the same extra characters, but of course at totally different indexes in the upper 128 codes. Doing this translation on Windows is a call to two WinAPIs to translate from ANSI to UTF-16 and then to OEM, and vice versa. Doing it on the Mac takes a few more, and completely different, APIs to translate via the widechar roundabout, and that only guarantees that the result is similar to Windows. Doing it under any Linux version is simply a total pain in the a$$, since there are about 5 different libraries for character encoding translation, but each comes with its own list of secondary dependencies. Doing all that in the C code wrapper is bad enough, but doing it in the LabVIEW diagram is simply an exercise in vain.

vxWorks doesn't come with OpenSSL out of the box, but several NI tools such as the web server will install an OpenSSL library (that NI presumably cross-compiled for this purpose). The one problem with this is that you have no real control over which OpenSSL version gets installed, which can be a serious problem when you want to use certain features.
Just about every OpenSSL client sooner or later tends to check for the OpenSSL version to disable some functionality based on lack of feature support, or to do rather complicated circumventions to work around specific bugs in certain versions. A nice exercise to add to a LabVIEW wrapper too!
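To illustrate the Windows side of the codepage translation mentioned above, here is a minimal sketch (not the actual lvZIP code; error handling and buffer management are kept deliberately simple): convert an ANSI string to UTF-16 and from there down to the OEM codepage.

#include <windows.h>
#include <stdlib.h>

/* Convert an ANSI (CP_ACP) string to the OEM (CP_OEMCP) codepage via UTF-16.
   Returns a newly allocated string that the caller must free(), or NULL on failure. */
char *AnsiToOem(const char *ansi)
{
    /* first query the required UTF-16 buffer size, then do the conversion */
    int wlen = MultiByteToWideChar(CP_ACP, 0, ansi, -1, NULL, 0);
    wchar_t *wide = (wchar_t*)malloc(wlen * sizeof(wchar_t));
    char *oem = NULL;
    if (wide && MultiByteToWideChar(CP_ACP, 0, ansi, -1, wide, wlen))
    {
        /* now convert the UTF-16 string down to the OEM codepage */
        int olen = WideCharToMultiByte(CP_OEMCP, 0, wide, -1, NULL, 0, NULL, NULL);
        oem = (char*)malloc(olen);
        if (oem)
            WideCharToMultiByte(CP_OEMCP, 0, wide, -1, oem, olen, NULL, NULL);
    }
    free(wide);
    return oem;
}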
-
You don't use CINs anymore and consequently also not cintools. The only things in that directory that are still needed are the extcode.h file (and its support headers, but you usually don't deal with those directly) and the labview.lib file to link your shared library against. CINs aren't even supported on newer LabVIEW platforms such as the 64-bit versions or Linux RT. A shared library/DLL with the Call Library Node has been the way to go for several years already. If you do it right you end up with one VI library without any platform-specific code paths, one shared library per supported platform, and one C source file for your shared library/wrapper.
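As a rough illustration of what such a wrapper function looks like (the function name and its behaviour are made up for this sketch; only the extcode.h types and the NumericArrayResize/MoveBlock memory manager calls are the real LabVIEW APIs):

#include "extcode.h"
#include <string.h>

/* Fill a LabVIEW string handle, passed from the diagram, with a C string.
   The diagram calls this through a Call Library Node with the parameter
   configured as a LabVIEW string handle passed by pointer. */
MgErr FillLVString(LStrHandle *lvStr, const char *text)
{
    int32 len = (int32)strlen(text);
    /* resize (or allocate) the handle so it can hold len bytes */
    MgErr err = NumericArrayResize(uB, 1, (UHandle*)lvStr, len);
    if (!err)
    {
        MoveBlock(text, LStrBuf(**lvStr), len);
        LStrLen(**lvStr) = len;
    }
    return err;
}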
-
We agree to disagree here! I find maintaining platform discrepancies and low-level pointer acrobatics on the LabVIEW diagram level simply a pain in the a$$ and a few other places too at the same time. It's much easier to maintain these things at the C source level, and it allows easy adaptation in a generic way, so that the LabVIEW part can concentrate on what it is best at, and the C part too.

I know that lvZIP is not an ideal example since the 64-bit support is still not released, but supporting 64 bit there, if everything had been done on the LabVIEW level itself, would have been a complete nightmare; now it is basically a matter of working out one kink in the cable to allow private refnums to work for 64-bit pointers too. There are two solutions for that: using an undocumented LabVIEW feature that has existed since at least LabVIEW 7.0, or cooking up something myself to translate between 64-bit pointers and LabVIEW 32-bit refnums.

The real reasons that lvZIP still isn't 64 bit are, however, much more mundane, aside from time constraints. For one, I only recently got a computer with a 64-bit OS, and for another, a change at SourceForge regarding SVN read/write access, which is not natively supported by TortoiseSVN, has kept me from working on this for a long time. But an API like OpenSSL or FFMPEG, which makes use of complex parameters beyond flat clusters, is IMHO simply not maintainable for the long term without C wrappers.
-
On the diagram level LabVIEW always uses 64-bit integers for pointer-sized values, since LabVIEW also mandates that the flattened format of every datatype is the same on every LabVIEW platform. If it adapted the pointer-sized integer to the platform, a cluster containing one or more pointers would be variable-sized depending on the platform you run it on. This has one bad implication: you cannot define a cluster in LabVIEW that contains pointers and pass it to a Call Library Node as a struct. Such a struct will always mismatch the natively expected datatype on either the 32-bit or the 64-bit system. The only solution there is to create both types and call the function with conditional compilation depending on the platform it is executing on.

It is unfortunate, but the LabVIEW developers had the choice between maintaining the long-standing (since LabVIEW 2.5) paradigm of guaranteeing flattened data format consistency across all LabVIEW platforms, or allowing easy interfacing to external libraries containing pointers in their struct parameters. Considering that the flattened data paradigm has existed for over 20 years already, and pointers are not really a native feature of LabVIEW anyway, it is easy to see why they made the choice they did.

While accessing the internals of AVFormatContext does indeed seem required, it is simply a total pain in the ass to do such things in a LabVIEW diagram, especially when you consider the magic you need to do when such structures contain pointers, which they most likely do, and you want to support 32 bit and 64 bit seamlessly. This is definitely the point where starting to write a LabVIEW-specific wrapper library in C(++) is simply the only useful way to proceed.
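A contrived example of what that size mismatch looks like on the C side (the struct here is invented, not an actual FFMPEG type):

#include <stdint.h>
#include <stdio.h>

/* A struct containing a pointer: its size and field offsets differ
   between a 32-bit and a 64-bit build of the library. */
typedef struct
{
    int32_t length;
    char   *buffer;   /* 4 bytes in a 32-bit build, 8 bytes in a 64-bit build */
} Record;

int main(void)
{
    /* prints 8 (with default packing) when compiled for 32 bit, 16 for 64 bit;
       a LabVIEW cluster can only ever match one of the two layouts */
    printf("sizeof(Record) = %u\n", (unsigned)sizeof(Record));
    return 0;
}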
-
You can't use fixed-size pointer elements if you ever intend to allow your VIs to run in both LabVIEW 64-bit and LabVIEW 32-bit. A pointer is 32 bit when the library is compiled for 32 bit and 64 bit when the library is compiled for 64 bit. The library must be compiled for whatever bitness the calling process is, e.g. 32 bit for LabVIEW 32-bit or 64 bit for LabVIEW 64-bit, independent of the bitness of the OS you are running on. On the other hand, you never have 32-bit and 64-bit pointers intermixed in the same process environment. It is either one or the other, never both at the same time.

Having to redefine the AVFormatContext struct in your own code is definitely a very bad sign. If it is not declared as a complete type in the public headers of FFMPEG, then it is not meant to be accessed from outside the library! No exceptions here! The reason is that such types typically change between different versions of the library, and accessing those structs directly means that your calling application is no longer able to deal with a different version of the library.
-
Don't tell me you are thinking about reimplementing OpenGL on top of the 2D Picture Control!
-
You are fully right with all of this, except that for opaque data pointers the caller doesn't and shouldn't have any idea about the size of the structure the pointer is pointing to. Instead, such APIs always have a function that creates the according structure and hands back the data pointer to the caller, and logically a corresponding function that takes that pointer and deallocates any resources it refers to, including the actual structure itself. For LabVIEW it is in all cases just a pointer-sized integer.
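A minimal sketch of that pattern (all names here are invented, this is not any specific library's API): the caller only ever sees an opaque pointer, and every operation on it goes through the library's own functions.

/* public header: the struct is only forward declared, its contents stay hidden */
typedef struct Session Session;

Session *Session_Create(const char *name);   /* allocates the structure and returns the opaque pointer */
int      Session_DoWork(Session *s);         /* operates on it without exposing its layout */
void     Session_Destroy(Session *s);        /* frees everything it refers to, including the structure */

/* From LabVIEW, each of these is a Call Library Node call where the Session*
   parameter is simply configured as a pointer-sized integer; the diagram never
   knows (or needs to know) how big the structure behind it is. */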
-
Well, let me tell you about the lvZIP library on OpenG. That is magnitudes simpler than the FFMPEG library in terms of number of exported APIs as well as functionality. And creating that was more like a one or two year part-time project than a weekend project. And I knew some substantial C programming when I started with that, although I sure learned a few tricks during development of that library too. So even with an optimistic outlook I would consider the FFMPEG library to be a substantial full-time project that likely takes several months to do right.

As to your specific problem: those AVStream and other parameters are so-called opaque data types. Basically that means that the external declaration is simply a named datatype with no public data content. As such, LabVIEW cannot do anything with this datatype, and the import wizard has to leave the not fully declared datatype undefined. Even if you go into the header file and add something to the structure to make it parsable, you are really misguiding the import wizard as to what it should do. The only kind of fix that might sort of work is to define AVStream and the others to be of type void instead of an unfinished struct. Then AVStream* becomes void*, which is actually what you want to have here.

The idea behind opaque types is that in the external API they are only treated as a pointer to an undefined struct, much like what Windows uses HANDLEs for. They are basically a pointer to a data structure that the actual API knows how to interpret, but that the caller should never be concerned about in any way. By making it an opaque type, the programmer can make sure that no caller can ever concern itself with the contents of that structure. Internally, the library source code has another header which declares the datatype completely so it can be used inside the library properly. The advantage of using AVStream* and friends as seen here, instead of declaring each of these types as void*, is that you as caller cannot accidentally pass a pointer that describes a different resource than an FFMPEG stream to this function. Well, that applies to C(++) only of course. In LabVIEW you don't have that automatic syntax check based on the C header declaration.

Personally, I would probably redefine all those opaque types to void and then let the import wizard correctly create the library calls. Here they should then end up as pointer-sized integers. You end up with the problem that you lose all type safety, as a pointer-sized integer that points to an AVStream is not distinguishable from another pointer-sized integer that points to some other resource. But solving that issue is even more involved and definitely goes way beyond any kind of weekend project.
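To make that header-editing idea concrete, a sketch only (whether your particular FFMPEG header version declares these types exactly like this, and the function shown, are assumptions for illustration):

/* --- original public header: an incomplete ("opaque") type ---
   The import wizard cannot generate anything useful from a pointer to this. */
typedef struct AVStream AVStream;

int some_function(AVStream *stream);

/* --- temporarily edited copy of the header, used only to feed the import wizard ---
   Now AVStream* parses as void*, i.e. a plain pointer-sized integer parameter. */
typedef void AVStream;

int some_function(AVStream *stream);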
-
Under Windows there is a special root loop in LabVIEW that registers a LabVIEW window class on startup. That supposedly fails when it is already registered, and in the old days LabVIEW simply passed control to that other instance and then terminated. The allowMultipleInstances ini key is a fairly new addition (well, around LabVIEW 7 or so) that seems to make LabVIEW not terminate when RegisterClass() returns an error but instead simply continue.

Under Linux there is no such mechanism, and it is even more unusual there for an application to not allow being started more than once. They would have to add some internal IPC mechanism to check for that on startup, and I'm sure that has never really come up so far; even a Product Suggestion has a very low chance of ever making it into LabVIEW. allowMultipleInstances is most likely not present in any non-Windows version of LabVIEW.

However, at least on Linux (and I suppose on OS X too) you can easily do this by creating a startup shell script for your LabVIEW application and checking in there, with shell commands, for the existence of another instance of the same app. Look here for a fairly simple possibility to do that.
-
As SDietrich mentioned, the pressing of a button can be intercepted by pop-up windows of any other application in the system. A slightly more involved possibility, which would still work from VBScript, is to enable the ActiveX interface of the LabVIEW application and then control the application explicitly by invoking ActiveX methods of the LabVIEW server. This post is part of a thread that discusses the possibility to control a LabVIEW VI from VBScript, and the post even contains a small sample.
-
LabVIEW Memory Manager Functions
Rolf Kalbermatter replied to GeorgeG's topic in Calling External Code
First, the distinction between the AZ and DS memory spaces was removed long ago. The functions remain for backwards compatibility but really allocate exactly the same kind of handle. Second, you shouldn't try to do this if you don't have a very deep understanding of C memory pointers. I can give you the principal idea, but I can't do all the work for you. You will likely have to tweak this code and it may even contain syntax errors, as I'm writing it from memory without any compiler check. So watch out and reconsider whether you really want to go down this path.

/* Call Library source file */
#include "extcode.h"
#include <string.h>

/* lv_prolog.h and lv_epilog.h set up the correct alignment for LabVIEW data. */
#include "lv_prolog.h"

/* Typedefs matching the LabVIEW cluster and the array of those clusters */
typedef struct {
    int32 Numeric;
    LStrHandle String;
} TD2;

typedef struct {
    int32 dimSize;
    TD2 Cluster[1];
} TD1;
typedef TD1 **TD1Hdl;

#include "lv_epilog.h"

MgErr funcName(TD1Hdl *data);

MgErr funcName(TD1Hdl *data)
{
    MgErr err = mgNoErr;
    int32 i, strLen, oldLen, arrLen = ...;   /* arrLen: however many records you want to return */

    if (*data)
    {
        oldLen = (**data)->dimSize;
        if (oldLen > arrLen)
        {
            /* Deallocate the string handles in clusters that get removed when resizing the array down */
            for (i = arrLen; i < oldLen; i++)
            {
                LStrHandle *str = &((**data)->Cluster[i].String);
                if (*str)
                {
                    DSDisposeHandle((UHandle)*str);
                    *str = NULL;
                }
            }
        }
        err = DSSetHandleSize(*data, sizeof(int32) + arrLen * sizeof(TD2));
        if (!err && oldLen < arrLen)
        {
            /* Newly added records contain garbage after the resize; clear them so the
               string handles start out as NULL before NumericArrayResize sees them */
            memset(&((**data)->Cluster[oldLen]), 0, (arrLen - oldLen) * sizeof(TD2));
        }
    }
    else
    {
        /* No incoming array handle: allocate a new, zero-initialized one */
        *data = (TD1Hdl)DSNewHClr(sizeof(int32) + arrLen * sizeof(TD2));
        if (!*data)
            err = mFullErr;
    }
    if (!err)
    {
        TD2 *rec = (**data)->Cluster;
        for (i = 0; i < arrLen; i++, rec++)
        {
            rec->Numeric = i;
            /* sourceString stands for whatever data you want to put into each record */
            strLen = (int32)strlen(sourceString);
            err = NumericArrayResize(uB, 1, (UHandle*)&rec->String, strLen);
            if (err)
                return err;
            MoveBlock(sourceString, LStrBuf(*(rec->String)), strLen);
            LStrLen(*(rec->String)) = strLen;
        }
        (**data)->dimSize = arrLen;
    }
    return err;
}

This should give you a start. Please note that I did try to make some provisions to properly deallocate internal resources if the array resize results in a shortening of the incoming array. However, the error handling is not perfect at all. Currently, if the code errors out for any reason, the array could actually stay in an inconsistent state, resulting in memory leaks, since LabVIEW wouldn't know about the already newly allocated array records. In this example that is not really a big problem, as the only errors that can happen are memory manager errors, and once such an error has occurred you'll have to restart LabVIEW anyway. But if you add your own code in there to actually retrieve the data from somewhere, you might run into more normal errors, and just bailing out like this in the array fill loop could have really bad consequences in terms of memory leaks.
-
A shared library seems like a bad idea for something that is supposed to be a self-contained background application (daemon). Instead you should simply create a headless executable and add it to your boot sequence. How you add an executable to the boot sequence depends a bit on your specific Linux distribution and version, but you usually add a script that calls your executable somewhere in /etc/rc.d/, /etc/rc.d/init.d, or /etc/init.d, depending on your Linux version. This script is a shell script that takes certain parameters like start and stop and translates them into calls to your executable. Theoretically you could even program your LabVIEW application to simply take those command line parameters and act accordingly, and then just add a symlink to your executable in the right sysinit location.

Your LabVIEW daemon really is a headless application, as it will normally run directly in the system without any GUI. In order to allow user configuration you would have to add some IPC communication between your UI component and your daemon, such as a little TCP/IP server in your daemon program. You don't even have to write that server yourself: you can enable the VI Server in the LabVIEW application and simply call, in the daemon, some VIs that update its internal state through the Call By Reference method of VI Server.
-
Just a tiny bit more involved really! I just received the result of my own CLA certification and passed, although barely. The comments on the sheet about everything that would still need to be implemented make it sound ridiculous to expect it all to be finished in anything near 4 hours. So prepare with the sample exam, and make sure you get your basic framework done in the shortest possible time. I would say if you can get your basic framework with skeleton VIs for all the sub-units like GUI, error handler etc. in less than an hour, then you are more or less ready to take the exam. After that hour you can start to stamp all the requirement tags into the different VIs and start to fill in the raw diagram structures according to the requirements. One piece of advice: rather than trying to implement real code, describe what needs to be done in text inside the various structures as much as possible. That will save you some time. And yes, if you are able to power-create your basic framework without much thought, you can recreate it for the real certification almost blind, with some minor variations.
-
Application Task Kill on Exit?
Rolf Kalbermatter replied to hooovahh's topic in Application Design & Architecture
If "End Process" doesn't work, your application is defininitly doing something VERY low level. Do you call some drivers that use kernel device driver somewhere? Except DAQmx of course, which definitely does have such drivers, but hasn't behaved like this on me so far. Other 3rd party vendor drivers however have done such things to me in the past. Unless your application is stuck in kernel space, "End Process" really ought to be able to kill the process cleanly. -
You can't dismiss the device itself. Some devices have an internal timeout to free up resources, which means they will actively and willfully drop any connection that has not seen any data transmission for a certain amount of time. While your desktop computer has a sea of memory and CPU power to process many parallel TCP/IP streams at the same time, your embedded device is typically much more resource constrained. It may not even be able to serve more than one endpoint at the same time, so any host connecting to it and leaving a connection hanging could block the device for everyone else entirely. That may be a security feature, locking out other network access while you do some sort of transaction on the device that needs to be uninterrupted by anyone else, but if the device allowed infinite time for a connection to stay open, any misbehaving application could block the device totally for everyone else.

So check your documentation! Most likely there is some info as to how long a data connection can stay open and inactive before the device closes it from its own side, which is in fact the most common reason for error 66. Another possible reason for this error is that the device detects some error while serving the connection, either malformed data packets or some network protocol error due to, for instance, a bad network card or a noisy connection, and simply drops the connection on any such error. That is the safest thing to do in network communication: if there is any error during processing of the network connection, close it immediately and let the client reconnect. Your client software should be able to cope with that by using the information returned in the error cluster. Generally, a robust network client would treat error 56 as an indication to retry the last operation one or more times (but not indefinitely), and if error 56 persists or any other error occurs, close the connection and attempt to reopen it.
-
The refnum goes around the loop but the error cluster goes through the loop and then to the two Close Refnum nodes. At least that is how it looks in the picture. Maybe the real wiring is behind the loop, but that would be a major style fault. Unfortunately the VI Snippet resource seems to get lost between posting to LAVA and downloading it to my computer, so I can't really check it.
-
Shouldn't really happen, since the error cluster is wired through (at least it looks like it is; I didn't check the actual code).
-
Well, the code you claim causes problems in LabVIEW is commented out in the VBA code. So are you sure it does anything different in VBA than in LabVIEW if you enable that code?
-
Other controls?? I'm pretty sure the Picture Control is still about the only one which allows that! Also note that its background is light blue, indicating that it is considered an Advanced (scripting) feature. Graphs do have a Cursor property too, since about LabVIEW 4 or so, but that is for something very different!
-
That is not a correct analysis, and that solution is only one way to solve the problem. The real problem is not a relative path but simply the fact that your DLL depends on another DLL. No path to the other DLL is stored in your DLL, just its name. Windows, when asked to load a DLL, will scan the entire import list of that DLL and load any dependency of that DLL too. For that it looks in the following locations:

1) It checks whether the current process has already loaded a DLL with the same name; if so, that already loaded DLL is simply linked to the imports of the DLL currently being loaded.
2) It then looks in the same directory as the executable file of the current process.
3) Next it looks in the <System> directory.
4) Then in the 16-bit <system> directory.
5) Then it looks in the <Windows> directory.
6) Next is the current directory (that is, the directory where the latest operation was done from inside the current process; that can be the double click to start the application, any file dialog box that is dismissed with anything but Cancel, or an explicit call to the SetCurrentDirectory() API).
7) Last but not least, it looks in any directory listed in the PATH environment variable.

In Windows XP and earlier, point 6) is by default placed between 1) and 2). Placing your secondary DLLs into the LabVIEW.exe directory is the safest location, but it quickly gets messy if you have more than one or two such DLLs. It is also a hassle to remember to include those DLLs in your application executable build and make sure they get into the root directory of your application. The proper solution is to have an installer for your DLL that also takes care of installing all dependencies correctly, and that installer should be created by the supplier of your DLL.
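Because of rule 1) above, a caller can also side-step the search order by explicitly loading the dependency from a known location before the dependent DLL is loaded. A minimal sketch using the Win32 LoadLibrary API (the DLL names and paths here are just placeholders):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Load the dependency first, from an explicit location of our choosing.
       Once it is in the process, rule 1) makes the loader reuse it when the
       dependent DLL later asks for "helper.dll" by name. */
    HMODULE helper = LoadLibraryA("C:\\MyApp\\libs\\helper.dll");
    if (!helper)
    {
        printf("could not load helper.dll, error %lu\n", GetLastError());
        return 1;
    }
    /* Now the dependent DLL resolves its import against the already loaded copy */
    HMODULE wrapper = LoadLibraryA("C:\\MyApp\\libs\\wrapper.dll");
    if (!wrapper)
        printf("could not load wrapper.dll, error %lu\n", GetLastError());
    return 0;
}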
-
Getting a pointer to a variant's data in C
Rolf Kalbermatter replied to martijnj's topic in Calling External Code
And it definitely should make you feel uneasy! Doing that is a guaranteed recipe for disaster somewhere down the road. Even using the undocumented LvVariantGetDataPtr() function and friends is only slightly better for anything that you intend to have used by other people. Since it is undocumented, NI is free to change its prototype or semantics, or discontinue it altogether, whenever they decide to. Yes, they would have to review several other internal projects to adapt to the new interface, which is a hassle, and therefore they will think three times about doing that, but it is possible and manageable as long as they have control over every piece that makes legitimate use of this function. Once the function is publicly documented, it is carved in stone for eternity and no changes are really possible anymore.

You may understand the implications of using such a function in your code, but if you ever distribute your library to other people, they will most likely have no clue at all about the C code interface in general, and the use of an undocumented LabVIEW API in particular, and with a new LabVIEW version everything may fall to pieces for them. The proper way to handle this would be to get someone from NI to give you a (semi-)official document that does document these functions. If you can manage to get that, then you can be somewhat confident that the LabVIEW team feels fairly safe about this API being a standard for many versions of LabVIEW to come. It can still break for many reasons in the future, such as modifications to the LabVIEW source code to support new hardware architectures or such, but at least you have some assurance that someone in the know believes this is a permanent API.
-
The sources for all OpenG libraries are on SourceForge. There is a project for the "OpenG Toolkit" and a separate project for "LabPython", since LabPython predates the OpenG Toolkit by a few months. The first search result in Google points to the LabPython home page hosted on SourceForge, which has links to the SourceForge project page, where you can go to the code section and see that it contains both the LabVIEW and the C code. It's still in CVS, but there is an option to download the entire repository as a GNU tarball, so there is no need to install a CVS client if you don't want to.

As to the reason why it may crash when you add ROOT to your Python project, there are many possibilities. LabPython was developed with Python 2.3. It seems to work fairly well for most people with Python versions < 3.0, but there seem to be problems with certain Python libraries that contain binary components (C(++) DLLs compiled as Python modules). The reason is probably some version conflict in runtime libraries between LabVIEW, the LabPython DLL and those binary Python modules. Hacking your own ROOT script interface based on LabPython might be a possibility, but unless you are really deep into C programming I wouldn't recommend it.
-
1) is officially not only not supported by Apple but, according to them, fully illegal, and it is actively frustrated.
2) you need to buy a Mac license.
3) and further are in principle correct, but because of 1) difficult and illegal to do.
-
Actually, most modern text-based code IDEs nowadays have auto-indent with configurable code style rules, so there certainly exists a "make code look nice" function there, although it is usually a menu entry and not a button.