
How to call a dll that has an ENUM definition



I'm calling a dll that has an ENUM in the definition. I've included a snippet of the code example they sent for Visual Basic. What I can't figure out is what size to declare for the ENUM. Is it 8, 16 or 32 bits?

' Set Measurement Mode
Public Enum LKIF_MEASUREMODE
    LKIF_MEASUREMODE_NORMAL  ' normal
    LKIF_MEASUREMODE_HALF_T  ' translucent object
    LKIF_MEASUREMODE_TRAN_1  ' transparent object
End Enum

Public Declare Function LKIF_GetMeasureMode Lib "LkIF.dll" (ByVal HeadNo As Long, ByRef MeasureMode As LKIF_MEASUREMODE) As Long

George


QUOTE (ned @ Apr 3 2008, 04:36 PM)

According to this MSDN page (http://msdn2.microsoft.com/en-us/magazine/cc163568.aspx), "If no underlying type is explicitly declared, Int32 (Integer in Visual Basic) is used implicitly."

That's true only on a 32-bit system. I'm pretty certain that on a 64-bit system the default would be Int64. And that's only with the MSVC++ compiler; I'm pretty sure the C++ standard doesn't mandate any particular size when none is defined, so some compilers may not conform to this. You'd need to know the compiler used to build the DLL.


QUOTE (Aristos Queue @ Apr 4 2008, 02:38 PM)

That's true only on a 32-bit system. I'm pretty certain that on a 64-bit system the default would be Int64. And that's only with the MSVC++ compiler; I'm pretty sure the C++ standard doesn't mandate any particular size when none is defined, so some compilers may not conform to this. You'd need to know the compiler used to build the DLL.

Actually, C compilers often use the smallest integer type that can hold the highest-valued enumerator (the standard leaves the choice implementation-defined). Maybe C++ changed that in favor of the int datatype.

So

enum one_three {
    zero,
    one,
    two,
    three
};

can end up as small as an int8 with such a compiler.

To force a specific int size one often defines a dummy value:

enum one_three {
    zero,
    one,
    two,
    three,
    maxsize = 66000
};

will make sure it is at least an int32, since 66000 does not fit in 16 bits.

Rolf Kalbermatter


QUOTE (george seifert @ Apr 4 2008, 10:09 AM)

Nice to know it's mostly guesswork when it comes to enums. I tried guessing Int32 and it worked. Thanks.

It shouldn't need to be guesswork; Visual Basic provides a way to explicitly specify the representation, using the "Public Enum [Name] As [Type]" syntax. Since the Visual Basic enum definition you provided doesn't specify a type, it's safe to assume it's an I32.


QUOTE (ned @ Apr 7 2008, 10:08 AM)

Since the VisualBasic enum definition you provided doesn't specify a type,

DOH! Everyone on the thread just assumed that text meant C++. I didn't even really think about the syntax there -- just assumed you had some interesting macros. Important tip: When posting code from a non-LV language, make sure to tell everyone really loud which language it is you're posting! :-)


QUOTE (rolfk @ Apr 7 2008, 03:30 AM)

Actually, C compilers often use the smallest integer type that can hold the highest-valued enumerator (the standard leaves the choice implementation-defined). Maybe C++ changed that in favor of the int datatype.

So

enum one_three {
    zero,
    one,
    two,
    three
};

can end up as small as an int8 with such a compiler.

To force a specific int size one often defines a dummy value:

enum one_three {
    zero,
    one,
    two,
    three,
    maxsize = 66000
};

will make sure it is at least an int32, since 66000 does not fit in 16 bits.

There is actually one other important aspect here. While C (and, I believe, C++) may use the smallest integer that can hold the biggest enum value, there is also something called padding: scalar elements inside a struct are aligned to a multiple of the element's data size, or of the alignment specified through a #pragma statement (or passed to the C compiler as a parameter), whichever is smaller.

So with the above enum type, which such a compiler stores as an int8, and the following structure:

struct {
    enum one_three elm;
    float something;
};

"something" will be aligned to a 32-bit boundary by all modern C compilers when using the default alignment (usually 8 bytes).

So the C compiler will in fact create a struct containing an 8-bit integer, 3 padding bytes, and then a 32-bit float. Treating the enum as an int32 in that case is only correct if the memory was initialized to all zeros before the external code filled in the values, and even then only on little-endian machines (Intel x86).

Rolf Kalbermatter

