Everything posted by Rolf Kalbermatter
-
To add to your original question: No, the Twain API is not trivial to call from an environment like LabVIEW. It is based on a very old Windows 3.0 paradigm that communicates with the client through the application's window event queue, and this event queue is pretty well hidden from the normal LabVIEW level where you operate inside VIs. This is largely because each LabVIEW platform has a fundamentally different window manager interface that had to be handled consistently across all platforms. So this sits deep in the guts of LabVIEW and is accessible only with some low-level Windows API magic in a way that allows the Twain messages to be handled properly. I developed such an interface in the past for some internal projects and it worked, but it is still quite a stretch from something that could be released to the greater public without risking disappointment among many developers who would go about using it. It worked for me because I knew how to call it (and what not to try to do) and because I could go into the C code and debug any issues that surfaced. But it was a very painful debugging experience, since single-stepping into those parts of window event handling generally creates all kinds of hard deadlocks and race conditions, both in the standard Windows event handling and in the more specialized LabVIEW window manager layer.
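To make the mechanism concrete, here is a minimal sketch (in C, following the TWAIN specification; DSM_Entry is assumed to have been resolved from the data source manager DLL, the identity variables are assumed to be filled in when the DSM and data source were opened, and error handling is omitted) of the message forwarding a TWAIN client must do inside its Win32 message pump:

    #include <windows.h>
    #include "twain.h"

    /* assumed resolved via GetProcAddress from TWAINDSM.dll / TWAIN_32.DLL */
    extern DSMENTRYPROC DSM_Entry;
    /* assumed set up during DSM and data source open */
    extern TW_IDENTITY gAppIdentity, gDsIdentity;

    /* Offer every Windows message to the data source first */
    static BOOL ProcessTwainMessage(MSG *msg)
    {
        TW_EVENT twEvent;
        twEvent.pEvent = (TW_MEMREF)msg;
        twEvent.TWMessage = MSG_NULL;
        TW_UINT16 rc = DSM_Entry(&gAppIdentity, &gDsIdentity,
                                 DG_CONTROL, DAT_EVENT, MSG_PROCESSEVENT,
                                 (TW_MEMREF)&twEvent);
        /* TWRC_DSEVENT: the source consumed it; react to twEvent.TWMessage
           (e.g. MSG_XFERREADY) and do not dispatch the message further */
        return rc == TWRC_DSEVENT;
    }

    void MessagePump(void)
    {
        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            if (!ProcessTwainMessage(&msg)) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
    }

It is exactly this message pump that a VI diagram never gets to see, which is why the interception has to happen in external code hooked into LabVIEW's own window handling.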
-
Well, it is quite possible that the first section of the structure needs to be filled in with specific values that tell the function what to return in the union and for which resource (device, subunit, or whatever). So having even one value off might simply cause the function to error out. Have you checked whether the function's return value itself indicates an error condition?
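As a generic illustration of that pattern (this is not the actual API in question; all names are hypothetical):

    #include <stdint.h>

    typedef struct {
        uint32_t size;      /* often must be sizeof the whole structure */
        uint32_t resource;  /* e.g. selects device vs. subunit */
        uint32_t infoType;  /* selects which union member gets filled */
        union {
            int32_t ival;
            double  dval;
            char    sval[128];
        } u;
    } QueryRecord;

    /* int rc = SomeQueryFunction(&rec);  check rc BEFORE reading rec.u */

If size, resource, or infoType doesn't match what the function expects, it will typically return an error code and leave the union untouched.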
-
Well, sval is not a pointer but a fixed-size string or byte array, and as such it must be inlined in the structure. Your bdrbag_byte_cluster.ctl is therefore the most accurate control to use. However, it only matches the Visual Basic definition, not the original C definition, where it is really 1500 bytes long, not just 127. As long as you are sure that the underlying function does not try to write past byte 127, there won't be a problem, though. None of the other typedefs resemble the C structure declaration in any way. And candidus, alignment is not an issue for this particular structure. The alignment rule specifies that each structure element is aligned on the smaller of its integral element size and the alignment value. Here all numerics are 32 bits and therefore align automatically on their natural position, and the string has an integral element size of 1 byte and hence no alignment requirement.
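A sketch of the two declarations being compared (the numeric fields are assumed for illustration; only the sval size matters here):

    #include <stdint.h>

    typedef struct {        /* original C definition */
        int32_t a, b, c;
        char    sval[1500]; /* inlined in the struct, not a pointer */
    } BdrBagC;

    typedef struct {        /* what bdrbag_byte_cluster.ctl matches */
        int32_t a, b, c;
        char    sval[127];  /* safe only if the function never writes past byte 127 */
    } BdrBagVB;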
-
Do we need a pointer sized integer type?
Rolf Kalbermatter replied to mje's topic in Calling External Code
Theoretically there could be some use, making the conditional compile structure unnecessary, but it would violate a very fundamental paradigm that LabVIEW has kept intact since its inception as a multiplatform development system: a flattened datatype has the same format on all systems. Either that, or the Flatten function would have to treat the special pointer-sized datatype everywhere as a 64-bit entity (and we would have to hope that 128-bit pointers are far enough in the future that this wouldn't be obsoleted at some point or require a new large pointer type, all for the purpose of keeping the flattened format consistent). Personally I find this rather academic anyhow, since once you start to deal with API calls that take such parameters, it is time to write an intermediate shared library that translates between this type of structure and a more LabVIEW-friendly parameter list. In there, the compiler will typically take care of any target-specific bitness issues automatically (with some care when writing the C code to not introduce bitness troubles), and the LabVIEW diagram stays clean and proper for all platforms.
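A minimal sketch of that wrapper idea (all names are hypothetical): the C compiler picks the correct pointer size on each target, so the LabVIEW diagram never has to carry a pointer-sized integer at all.

    #include <stdint.h>

    typedef struct {
        void    *buffer;    /* pointer-sized on every target */
        uint32_t length;
    } NativeDesc;

    extern int NativeApiCall(NativeDesc *desc);

    /* Exported for LabVIEW: a plain array plus length, no pointer in sight */
    int32_t WrapApiCall(uint8_t *data, uint32_t length)
    {
        NativeDesc desc;
        desc.buffer = data;
        desc.length = length;
        return (int32_t)NativeApiCall(&desc);
    }
-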
The sourceforge repository for OpenG uses SVN, which does not have pull and push requests like Git. So the administrator of the OpenG Toolkit project, which I believe is still Jim Kring, would have to add your sourceforge ID to the project before you can actually commit anything to it. Anyone with commit rights can change anything about the OpenG Toolkit, so it is not a right that everybody out there should have. In any case, it is preferable to have some discussion about proposed changes before committing anything to the repository.
-
C#/Measurement Studio and TCP with LabVIEW
Rolf Kalbermatter replied to GregFreeman's topic in Calling External Code
It's not really that difficult to stream data over a TCP/IP connection, and in fact it is a bit more trivial in LabVIEW than in C(++)/C#, but even there it is doable. You need to take care about byte order, which is big endian if you use the native LabVIEW flattening functions, and also about padding, although in general LabVIEW packs data as much as possible, except for booleans, which are sent as a byte. So you (or your colleague) will likely want to use some sort of C# library that allows sending data in big-endian form over a stream. Most likely you will need some specific code on the C# side to flatten and unflatten the structures into and from the stream. Writing a general-purpose library that can flatten and unflatten any form of structure into a stream is most likely too much of a hassle, also because C# doesn't really know anything like a cluster but uses classes for everything, so there is no strict order in memory like for a structure in C. You could of course create a library that uses reflection in C# to stream arbitrary structure classes onto the wire, but you would have to be very careful that the order of elements in the class definition stays consistent with what you use in the cluster on the LabVIEW side.
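For the receiving side, a sketch in C of what such a routine boils down to (recv_all is an assumed helper that loops until exactly n bytes have arrived; flattened LabVIEW arrays are prefixed with a big-endian 32-bit element count):

    #include <stddef.h>
    #include <stdint.h>
    #include <arpa/inet.h>   /* ntohl; use winsock2.h on Windows */

    extern int recv_all(int sock, void *buf, size_t n);

    int read_i32_array(int sock, int32_t *out, uint32_t maxElems, uint32_t *nElems)
    {
        uint32_t count;
        if (recv_all(sock, &count, 4) < 0) return -1;
        count = ntohl(count);              /* big-endian element count */
        if (count > maxElems) return -1;
        if (recv_all(sock, out, count * sizeof(int32_t)) < 0) return -1;
        for (uint32_t i = 0; i < count; i++)   /* swap each element */
            out[i] = (int32_t)ntohl((uint32_t)out[i]);
        *nElems = count;
        return 0;
    }

The same byte-swapping logic applies on the C# side, for instance via IPAddress.NetworkToHostOrder or by reversing the bytes before handing them to BitConverter.
-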
I think the best approach is to actually post your proposed fix here for discussion. There will usually be some discussion about it, and if it is considered useful and doesn't significantly break existing applications, it is quite likely to be included in the next release, whenever that might be. Significant improvements to the code as it is in the sourceforge repository certainly warrant the effort of going through the hassle of releasing a new package. Unfortunately, the current procedure about who does a new release is a bit unclear. I lack the detailed knowledge about the actual release procedures, and also the time to commit to doing this on a regular basis. Jonathan Green did a great job releasing new packages when there was something to release, but his focus has been shifting to other areas lately, which is unfortunate but of course entirely his right. But getting the discussion started about a proposed fix is for sure the first step anyway.
-
I wouldn't count on that! While it is theoretically possible, the implementation of many LabVIEW nodes sits on a different level than the DFIR and LLVM algorithms that allow for dead code elimination and invariant code optimizations. The LabVIEW compiler architecture does decompose loops and other structures, as well as common functions like some array functions, into DFIR graphs, but other nodes such as File I/O (Open, Read, Write, Close) and most likely also Obtain Queue are basically just calls into precompiled C++ functions inside the LabVIEW kernel, and DFIR and LLVM cannot optimize on that level.
-
Drift between Date/Time and Tick Count (ms)
Rolf Kalbermatter replied to mwebster's topic in LabVIEW General
You're forgetting to take into account that almost all systems nowadays are set to synchronize their time with a time server (if they have any internet connection). This means that the RTC is periodically adjusted to whatever the configured time server reports (and smarter algorithms will attempt to do this adjustment incrementally rather than in one big step). The timer tick, however, is simply an internal interrupt timer derived from the crystal oscillator that drives the CPU. This oscillator is not very accurate, as it really doesn't matter that much whether your CPU operates at 2.210000000000 GHz or 2.211000000000 GHz. The timer tick is used to drive the RTC in between time server synchronizations, but not more.
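The effect is easy to observe on any POSIX-like system, where the two time bases are directly exposed (a minimal sketch; CLOCK_REALTIME follows the NTP-adjusted clock while CLOCK_MONOTONIC is the free-running tick):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static double now(clockid_t id)
    {
        struct timespec ts;
        clock_gettime(id, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    int main(void)
    {
        double rt0 = now(CLOCK_REALTIME), mono0 = now(CLOCK_MONOTONIC);
        for (;;) {
            sleep(60);
            double drift = (now(CLOCK_REALTIME) - rt0)
                         - (now(CLOCK_MONOTONIC) - mono0);
            printf("accumulated drift: %+.3f ms\n", drift * 1000.0);
        }
    }
-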
Error 1097 means that the function somehow ran into a problem like addressing invalid memory. This can happen if you pass a pointer to the MoveBlock function that does not point to a memory area large enough for the function to work on. For instance, if you pass a LabVIEW string that has been preallocated with 1000 characters and tell the function to copy 2 * 1000 = 2000 bytes into it because the source is in Unicode. Or you forgot to preallocate the output buffer altogether, or you made some other miscalculation when computing the number of bytes to copy into the target pointer.
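A sketch of that miscalculation in C terms (MoveBlock as declared in LabVIEW's extcode.h; the buffer sizes are just the example numbers from above):

    #include <stddef.h>

    /* LabVIEW manager function: copies size bytes from src to dst */
    void MoveBlock(const void *src, void *dst, size_t size);

    void example(const wchar_t *unicodeSrc, size_t numChars)
    {
        char dst[1000];

        /* WRONG: 2 bytes per UTF-16 code unit, so 2 * numChars can
           exceed the 1000 bytes that were preallocated -> error 1097 */
        /* MoveBlock(unicodeSrc, dst, 2 * numChars); */

        /* Correct: never copy more than the destination can hold */
        size_t bytes = 2 * numChars;
        if (bytes > sizeof dst)
            bytes = sizeof dst;
        MoveBlock(unicodeSrc, dst, bytes);
    }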
-
Please mention in the future when you crosspost, as that helps people who want to answer see whether they can add anything beneficial that hasn't already been covered in the other thread. There is no way to create a LabVIEW VISA resource in your calling C code, AFAIK. The VISA resource is an internal LabVIEW datatype handle that contains both the underlying VISA session and the name of the VISA resource as used to open it, and possibly other LabVIEW-internal information. The only entity that knows how to create this resource is the LabVIEW kernel. But your C application does not link to the LabVIEW kernel at all, and there is no easy way to determine which LabVIEW kernel your DLL uses in order to link to it. Even if you knew how to link to the LabVIEW kernel (runtime engine) that the DLL uses, there are no documented functions in there to deal with VISA resources. Basically, what you want to do is doomed. Instead, you should replace the VISA resource name control with a string control. Then you can configure it as a string in the DLL interface and treat it as such in your C code. The only possible drawback is that in the DLL VI the VISA resource will be looked up every time you call that function, as the LabVIEW kernel has to find an already opened session with that name or create a new one if it doesn't yet exist. A little search magic on the NI forum would have given you that answer too.
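From the C side the call then becomes trivial (a hypothetical prototype for illustration; the actual name and parameters depend on how the DLL export is configured):

    #include <stdint.h>

    /* assumed prototype of the function exported by the LabVIEW-built DLL */
    int32_t ReadInstrument(const char *visaResourceName, double *value);

    void example(void)
    {
        double value;
        /* LabVIEW looks up (or opens) the VISA session by this name on each call */
        ReadInstrument("GPIB0::4::INSTR", &value);
    }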
-
Problems with calling a DLL
Rolf Kalbermatter replied to Nut Buster's topic in Calling External Code
Yes, you are missing the fact that the two arrays in the struct are inlined, not pointers. Even if they were pointers, a LabVIEW array handle is NOT equivalent to a C array pointer, nor to an inlined array. What you need to do is replace your arrays with clusters of 100 U8 elements and 6 U8 elements respectively. Creating a cluster constant with 100 elements by hand is rather inconvenient and time consuming. You best create these by converting a U8 array constant to the corresponding cluster with the Array to Cluster primitive, not forgetting to set the cluster size by right-clicking on the node and selecting the "Cluster Size ..." dialog. Then assemble the cluster/struct with a Bundle node.
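For reference, this is the kind of C declaration being matched (field names assumed; the sizes are from your post):

    #include <stdint.h>

    typedef struct {
        uint8_t data[100];  /* inlined: a cluster of 100 U8 in LabVIEW */
        uint8_t id[6];      /* inlined: a cluster of 6 U8 in LabVIEW */
    } Record;
-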
They of course can't do that out of the box. A VI in one application context shares absolutely no data with the same VI in another application context on its own. That is even true if you run the FGV in two separate projects on the same computer, and even more so if they run on different devices or different LabVIEW versions. As Jordan suggested, you will need to implement some real interapplication communication here, either through network variables (easiest once you get the hang of how to configure them and deploy them to the targets) or your own network communication interface (my personal preference in most cases). There are also reference design projects for cRIO applications, such as the CVT (Current Value Table) and the accompanying CCC (CVT Client Communication), that show a possible approach for implementing the second solution.
-
Telcos indeed have to deal with this all the time. And I would suspect these components to be a significant part of the cost of any switching station they put somewhere in the field. It could very well be that the price of their solutions would kill your superiors when they hear it. I only remember something about what was done for military-grade radio devices at a place I worked more than 20 years ago. They put so-called Transzorb diodes into the device for every signal path that went to any outside connector. Quite a few dozen in fact, and each of them cost something like $100 back then and could easily be damaged during the soldering of the PCBs. Granted, they were supposed to help not just with lightning but even with the EMP that would occur during a nuclear explosion, but whether that would really work, and would really still matter when it happened, is another story.
-
how to read word documents into my labview?
Rolf Kalbermatter replied to seoul's topic in Calling External Code
Fully agree, and scan any links in a post for known site names. The last spammy post contains a much "promoted" site name.
-
It's not about wires going behind VIs without being connected; that is something I have done myself maybe 5 times in over 20 years of LabVIEWing. It's about seeing to which terminal a wire is really connected. Of course, that may often seem unnecessary because of the different and incompatible datatypes of the terminals, but it can be a real gotcha when a VI has several datatype-compatible terminals. And yes, it is not as important in my own VIs, since I know how I wired them, but inheriting VIs from someone else is always much easier if such rules have been followed.
-
Are you a network engineer? This is quite specialized matter, and I would not dare to start doing this without consulting a specialist in installing network infrastructure. The wrong protection circuit could rather slow down your network traffic without really protecting your network from real surges. In general, there is almost no technology that will not at least get fried itself by a direct lightning strike to the installation or any attached appliance. But for less severe environmental impacts you still need to know quite a bit about both the network characteristics in question and the possible surges you expect. Is it static electricity or rather something else?
-
I can understand that sentiment. But my rule comes from the fact that when I move a VI (by nudging it one pixel up and to the side with the cursor) to see if the wires are correctly attached to the right terminals, and yes, that has been and still is one of my first methods when debugging code that appears to behave strangely, I want to see the wires move with it so I know they indeed are attached to the terminals they appear to be. With hidden bends you don't have that at all and sometimes need to move the node many steps to see if the wires really attach correctly. And shift-cursor is not a good way to do it. And to Shaun and others: be happy I'm not on the LabVIEW development team. I would penalize any call to a VI not using the 4-2-2-4 connector pane with an extra 1 ms delay, and document it as a special compiler optimization for the 4-2-2-4 connector pane pattern.