Paulus's Posts
  1. QUOTE - I realised I'd made one mistake: I was not using __stdcall at all (don't know where I got that idea from); it should have been __cdecl. Thanks for the explanation of the underlying mechanics, that makes things a lot clearer. Dependency Walker also proved to offer essential feedback. I could fiddle with Visual Studio and, through trial and error, get DLLs which work in Release mode. :thumbup: As I was also including some code from static libraries that I had built, these may have complicated the picture in some cases too (obviously not in the simple example case above, which was not working).

     For reference, I summarise here the Visual Studio settings which make everything work in Release mode, in the hope that it helps others similarly stuck. Assumptions: you have code which you first build into static libraries, which are then used by the DLL you ultimately wish to create; you are using MS Visual Studio 2005.

     For static libraries (which one might have anyway and later incorporate into DLLs):
     - Set runtime library (Properties : C/C++ : Code Generation : Runtime Library) to Multi-threaded (/MT)
     - Set calling convention (Properties : C/C++ : Advanced : Calling Convention) to __cdecl (/Gd)
     - Embed manifest (this ensures standard Windows bits are present) (Properties : Manifest Tool : Input and Output : Embed Manifest) to Yes

     For DLL projects, a similar set of settings:
     - Set runtime library (Properties : C/C++ : Code Generation : Runtime Library) to Multi-threaded DLL (/MD)
     - Set calling convention (Properties : C/C++ : Advanced : Calling Convention) to __cdecl (/Gd)
     - Make sure the linker generates a manifest (Properties : Linker : Manifest File : Generate Manifest) to Yes
     - Embed manifest (Properties : Manifest Tool : Input and Output : Embed Manifest) to Yes
     - Enable incremental linking (Properties : Linker : General : Enable Incremental Linking) to Yes (/INCREMENTAL)

     I had a few other settings, but I believe these are optional (and specific to what I wanted) and probably had nothing to do with fixing my earlier problems; I include them in case they are relevant:
     - Build in Release mode (optional)
     - Switch off the 64-bit issues warning (Properties : C/C++ : General : Detect 64-bit Portability Issues) to No
     - (If you are using maths library constants like M_PI) add the preprocessor definition _USE_MATH_DEFINES (Properties : C/C++ : Preprocessor : Preprocessor Definitions)
     - Set optimization (Properties : C/C++ : Optimization : Optimization) to Maximize Speed (/O2)
     - Set optimization (Properties : C/C++ : Optimization : Favor Size or Speed) to Favor Fast Code (/Ot)
     - Set whole program optimization (Properties : C/C++ : Optimization : Whole Program Optimization) to No
     - Disable C++ exceptions (Properties : C/C++ : Code Generation : Enable C++ Exceptions) to No
     - Set compile as (Properties : C/C++ : Advanced : Compile As) to Compile as C Code (/TC)
     - Ensure precompiled headers are not used (Properties : C/C++ : Precompiled Headers : Create/Use Precompiled Header) to Not Using Precompiled Headers
  2. Dear all, thanks for the helpful comments and advice. To answer the last few questions: I haven't been including "extcode.h", although I recognise it from older projects when I used CINs; nor have I been making any effort to set 1-byte alignment, either with a pack pragma (which I would not have known about) or with the similar setting in the Visual Studio project options under Properties : C/C++ : Code Generation : Struct Member Alignment. I found that using function parameters like double *an_array, int dim_size_passed_separately, rather than the CIN-style array handle, worked without any issues or any alterations to byte alignment, packing or anything else, and I therefore opted to pass arrays using pointers. No problems detected so far in that regard.
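The pointer-plus-size convention described above can be sketched as follows. This is an illustrative example, not the poster's actual code: sum_array and EXPORT_DLL are made-up names, and on the LabVIEW side the Call Library Function Node would be configured to pass the array as an Array Data Pointer and the size as a plain integer.

```c
/* EXPORT_DLL expands to the MSVC export attribute on Windows and to
   nothing elsewhere, so the sketch compiles on any platform. */
#ifdef _WIN32
#define EXPORT_DLL __declspec(dllexport)
#else
#define EXPORT_DLL
#endif

/* Hypothetical example of the "pointer plus separately passed size"
   style: LabVIEW hands over a pointer to the array data and the
   dimension size as an ordinary int, so no array handle, packing, or
   alignment settings are involved. */
EXPORT_DLL double sum_array(const double *arr, int n)
{
    double total = 0.0;
    int i;
    for (i = 0; i < n; i++)
        total += arr[i];
    return total;
}
```

The caller (here, LabVIEW) owns the buffer and guarantees n elements are allocated; the C side never resizes it, in keeping with the advice above.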
  3. Dear all, this is quite a general question about DLLs created in MS Visual Studio and the effects of building the DLL under Debug or Release. I'm not so much trying to solve a specific case with specific details; I'm trying to understand, in general, the full set of rigorous steps necessary to guarantee that a LabVIEW VI on a Windows machine (running Windows XP) will be happy to load a DLL.

     I find that for very simple DLLs (such as the example C code at the end of this post), LabVIEW 8.2 on some machines is happy to load the DLL and run it whether I build it under Debug or Release, whereas on another machine, ostensibly running the same version of XP and also LabVIEW 8.2, the Debug version of the DLL is rejected by the (same) VI with the following message (when exiting the DLL configuration dialog, after right-clicking the DLL icon on the VI diagram):

     Error loading "C:\blah_blah_debug.dll". This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem.

     whereas the Release version of the DLL is accepted and runs. This was particularly confusing because, in previous experience with admittedly much larger DLLs (which usually include other code from static .lib files written by other people or by myself) on yet a third PC, LabVIEW has often rejected a Release version of a DLL and yet has apparently been happy with a Debug version. The Release version on such occasions is rejected with the following message (while the VI is running):

     Error 13 occurred at dll_name.dll. Possible reason(s): LabVIEW: File is not a resource file.

     I had a look at the DLL documentation for LabVIEW, and it mentions compiling and building under both Debug and Release, so I don't understand enough about what actually happens under the bonnet (hood in US speak) to know what to do about it. I have heard a lot of mythological possibilities, so some concrete knowledge from experts would be very useful.

     Some points for guidance:
     - I always write in pure C, no C++ anywhere in my code. I therefore do not use extern "C" statements, since I am compiling the code with the Visual Studio option "Compile as C (/TC)" switched on in all cases.
     - I use the __stdcall convention with the __declspec(dllexport) token.
     - I do not use DEF files (this might be my downfall, but according to what I've read I shouldn't need a DEF file if I use __declspec(dllexport)).

     Any similar experiences? Help appreciated. Below is the simple example .c file, followed by the .h file:

     ```c
     #include "DLL_test.h"

     EXPORT_DLL int testFunction(int *value_in_out, int value_to_add)
     {
         int in = value_in_out[0];
         value_in_out[0] = in + value_to_add;
         return 0;
     }
     ```

     with a header file like this:

     ```c
     #ifndef __DLL_TESTING_NOV_08_TOKEN_
     #define __DLL_TESTING_NOV_08_TOKEN_

     #define EXPORT_DLL __declspec(dllexport)

     EXPORT_DLL int testFunction(int *value_in_out, int value_to_add);

     #endif
     ```
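As an aside to the extern "C" point above: even when everything is compiled as C (/TC), a common defensive header pattern wraps the declarations in extern "C" guards so the same header also works from C++ callers without name mangling. The sketch below is not the poster's actual header; EXPORT_DLL expands to nothing off Windows so it compiles anywhere, and the implementation is included only to make the example self-contained.

```c
/* --- header-style section (would normally live in the .h file) --- */
#ifdef _WIN32
#define EXPORT_DLL __declspec(dllexport)
#else
#define EXPORT_DLL
#endif

#ifdef __cplusplus
extern "C" {   /* prevents C++ name mangling if the header is ever
                  included from C++ code; a no-op in plain C */
#endif

EXPORT_DLL int testFunction(int *value_in_out, int value_to_add);

#ifdef __cplusplus
}
#endif

/* --- implementation, matching the thread's example --- */
EXPORT_DLL int testFunction(int *value_in_out, int value_to_add)
{
    value_in_out[0] += value_to_add;
    return 0;
}
```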
  4. QUOTE (Chris Davis @ Oct 23 2008, 04:11 AM) Ok, thanks for that. I agree with the idea of not trying to resize arrays inside the C code; I used to do it in CINs, calling the NI functions for that, but it is a hairy operation best avoided if one can lazily get LabVIEW to provide the arrays with space already allocated in advance.

     As for passing arrays by pointer: with a 2D array, what gets passed? I assume it is a pointer to the start of a 1D array which only looks 2D to LabVIEW; in other words, I assume that the "elt" pointer gets passed over to the C code.

     After my original post I did a bit more investigation into the way the 8 bytes representing the DBL were arranged. If the bytes in one representation were arranged 8 7 6 5 4 3 2 1, then I found the other representation ordered them 4 3 2 1 8 7 6 5; in other words, they were reordered at the 4-byte word level. Using a bit of extra code along these lines:

     ```c
     double *d_ptr;
     int *a;
     int *b;
     int temp;

     d_ptr = (double *)((*arrayHdl)->elt);

     a = (int *)(d_ptr);  /* set a to the first word of the double */
     b = a;
     b++;                 /* set b to the second word */

     temp = a[0];
     a[0] = b[0];
     b[0] = temp;
     ```

     I could convert the double between LabVIEW-readable format and C-readable format, and, as one might expect from the symmetry of the situation, the same operation converts back. This explains why the 4-byte integers were not troubled by entering and exiting DLLs via the array handle. I think your method of access will be better (for 8-byte doubles), as I do not plan to run every array through such a word-swapping procedure, element by element, coming in and going out of a DLL. Thanks for the info.
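The word swap above can be packaged as a small self-contained routine. This is a sketch of the same idea (swap_words is a made-up name, and the arrayHdl dereference is omitted; the function just operates on one double in place), using memcpy instead of raw int-pointer casts to avoid strict-aliasing problems:

```c
#include <stdint.h>
#include <string.h>

/* Exchange the two 32-bit halves of an 8-byte double in place.
   Applying the swap twice restores the original value, matching the
   symmetry the post describes. */
static void swap_words(double *d)
{
    uint32_t w[2], tmp;
    memcpy(w, d, sizeof w);   /* copy the 8 bytes into two 32-bit words */
    tmp  = w[0];
    w[0] = w[1];
    w[1] = tmp;
    memcpy(d, w, sizeof w);   /* write the swapped words back */
}
```

Because the operation is its own inverse, the same call converts in both directions, which is why the post notes that the identical code works going in and coming out.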
  5. Hi there, I'm currently trying to access some functions written in plain old C, compiled on Windows under Visual Studio 2005, from LabVIEW (8.2). The C code is written in static libraries, which I have called from a DLL. While debugging strange behaviour of the DLL, and noting comments in posts on LAVA and elsewhere, it appears that I hit a problem when passing an array of LabVIEW type DBL to the DLL via the array handle type, after defining the type in a header used by the DLL with code typically something like this:

     ```c
     typedef struct {
         int    size;
         double elt[1];
     } dblArray;

     typedef dblArray **dblArrayHdl;
     ```

     I find that double values passed from LabVIEW via the inner array (using a C pointer hooked up to the elt pointer) are, inside the C program, completely incorrect values. The same is true vice versa for numbers originating in C and propagating out to LabVIEW via DLL outputs.

     The standard explanation given for this nightmare is that LabVIEW uses big-endian while Windows (hence the C) tries to interpret using little-endian. Fine, all understood so far; but if this is true, can anyone explain why integer types are not similarly mangled? When the array type is, say, int32 (LabVIEW) mapped to int or long (inside the C), I find integer values entering and leaving the DLL and being correctly represented inside the C code, with no attempt to correct the endianness from LabVIEW to C or vice versa. If opposite endianness is the explanation for doubles then, unless I accidentally chose integers which happen to be endianness palindromes (i.e. look the same in each direction), the effect should create problems for integers as well. Doesn't anyone else find this bizarre?

     It was the very fact that integers survive the passage to and from DLLs which originally led me to naively believe that the DLL interface (and user configuration settings) were doing the heavy lifting for me and converting the LabVIEW types to C types and back, at least for integers, and so I naively assumed the same would happen for doubles as well. Can anyone explain the apparent inconsistency?
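For what it's worth, one possibility worth ruling out (hinted at by the Struct Member Alignment and pack-pragma settings mentioned elsewhere in the thread) is struct padding rather than endianness. The sketch below is an illustration under the assumption of a typical x86-64 compiler: with default alignment a double is placed on an 8-byte boundary, so elt starts at offset 8, whereas with 1-byte packing it starts at offset 4, immediately after the int32 size field. Reading through the default-aligned struct therefore starts 4 bytes (one 32-bit word) into the actual data, which is consistent with the word-level scrambling reported in post 4 and with integers surviving untouched when their offsets happen to line up.

```c
#include <stddef.h>
#include <stdint.h>

/* The array-handle struct as the compiler lays it out by default:
   the double member is 8-byte aligned, leaving 4 bytes of padding
   after the 4-byte size field (on typical x86-64 builds). */
typedef struct {
    int32_t size;
    double  elt[1];
} dblArrayDefault;

/* The same struct with 1-byte packing: no padding, elt starts at
   offset 4, directly after the size field. */
#pragma pack(push, 1)
typedef struct {
    int32_t size;
    double  elt[1];
} dblArrayPacked;
#pragma pack(pop)
```

offsetof can be used to confirm where elt actually lands under each layout, which is a quick way to check whether a given build matches the layout LabVIEW expects.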