Everything posted by Rolf Kalbermatter
-
The SourceForge repository for OpenG uses SVN, which does not have pull and push requests like Git. So the administrator of the OpenG Toolkit project, which I believe is still Jim Kring, would have to add your SourceForge ID to the project before you can actually commit anything to it. Anyone with commit rights can change anything about the OpenG Toolkit, so it is not a right that everybody out there should have. In general it is preferable anyhow to have some discussion about proposed changes before committing anything to the repository.
-
C#/Measurement Studio and TCP with LabVIEW
Rolf Kalbermatter replied to GregFreeman's topic in Calling External Code
It's not really that difficult to stream data over a TCP/IP connection; in fact it is a bit more trivial in LabVIEW than in C(++)(#), but even there it is doable. You need to take care about byte order, which is big endian if you use the native LabVIEW flattening functions, and also about padding, although in general LabVIEW packs data as much as possible, except for booleans, which are sent as a byte. So you (or your colleague) will likely want to use some sort of C# library that allows sending data in big-endian form over a stream. Most likely you will need some specific code on the C# side to flatten and unflatten the structures into and from the stream. Writing a general-purpose library that can flatten and unflatten any form of structure into a stream is most likely too much of a hassle, also because C# doesn't really know something like a cluster but uses classes for everything, so there is no strict order in memory like for a structure in C. You could of course create a library that uses reflection in C# to stream arbitrary structure classes over the wire, but you would have to be very careful that the order of elements in the class definition stays consistent with what you use on the LabVIEW side in the cluster.
-
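To make the byte-order point concrete, here is a minimal sketch (in Python rather than C#, purely for brevity) of producing a LabVIEW-style flattened cluster: big endian throughout, booleans as a single byte, arrays prefixed with an i32 element count. The cluster layout and names are illustrative assumptions, not taken from the original question.

```python
import struct

def flatten_cluster(count, value, flag, samples):
    """Flatten a hypothetical LabVIEW cluster {i32, f64, bool, 1-D f64 array}
    the way LabVIEW's default flattened form works: big endian, booleans as
    one byte, arrays prefixed with an i32 element count."""
    out = struct.pack(">i", count)               # i32, big endian
    out += struct.pack(">d", value)              # f64, big endian
    out += struct.pack(">B", 1 if flag else 0)   # boolean as a single byte
    out += struct.pack(">i", len(samples))       # i32 array length prefix
    out += struct.pack(">%dd" % len(samples), *samples)
    return out

data = flatten_cluster(7, 3.5, True, [1.0, 2.0])
# 4 + 8 + 1 + 4 + 2*8 = 33 bytes total
```

The C# side would have to perform the mirror image of this, byte for byte, for the LabVIEW end to unflatten it correctly.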
I think the best approach is to actually post your proposed fix here for discussion. There will usually be some discussion about it, and if it is considered useful and also doesn't break existing applications significantly, it is quite likely that it gets included in the next release, whenever that might be. But significant improvements to the code as it is in the SourceForge repository certainly warrant the effort of going through the hassle of releasing a new package. Unfortunately the current procedure about who does a new release is a bit unclear. I lack the detailed knowledge about the actual release procedures and also the time to commit to doing this on a regular basis. Jonathan Green did a great job releasing new packages when there was something to release, but his focus has been shifting to other areas lately, which is unfortunate but of course his full right. But getting the discussion started about a proposed fix is for sure the first step anyway.
-
I wouldn't count on that! While it is theoretically possible, the implementation of many LabVIEW nodes is on a different level than the actual DFIR and LLVM algorithms that allow for dead code elimination and invariant code optimizations. The LabVIEW compiler architecture does decompose loops and other structures, as well as common functions like some array functions, into DFIR graphs, but other nodes such as File I/O (Open, Read, Write, Close) or most likely also Obtain Queue are basically just calls into precompiled C++ functions inside the LabVIEW kernel, and DFIR and LLVM cannot optimize on that level.
-
Drift between Date/Time and Tick Count (ms)
Rolf Kalbermatter replied to mwebster's topic in LabVIEW General
You forgot to take into account that almost all systems nowadays are set to synchronize their time to a time server (if they have any internet connection). This means that the RTC is periodically adjusted to whatever the configured time server reports (and smarter algorithms will attempt to do this adjustment incrementally rather than in one big step). The timer tick, however, is simply an internal interrupt timer derived from the crystal oscillator that drives the CPU. This oscillator is not very accurate, as it really doesn't matter that much whether your CPU operates at 2.210000000 GHz or 2.211000000 GHz. The timer tick is used to drive the RTC in between time server synchronizations, but not more.
-
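The same two-clock situation exists in most environments; a small Python sketch (illustrative, not part of the original discussion) shows the principle: `time.time()` follows the NTP-adjusted RTC, while `time.monotonic()` follows a free-running tick, so the offset between the two drifts over long runs exactly as described above.

```python
import time

# Snapshot both clocks at (nearly) the same instant.
rtc0 = time.time()        # wall-clock time, periodically NTP-corrected
tick0 = time.monotonic()  # free-running tick derived from a hardware timer

def clock_offset():
    """Difference between RTC-based and tick-based elapsed time, in seconds.
    Over hours or days this value drifts as NTP nudges the RTC while the
    tick counter runs free."""
    return (time.time() - rtc0) - (time.monotonic() - tick0)
```

Logging `clock_offset()` once a minute over a day makes the drift directly visible.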
Error 1097 means that the function somehow ran into a problem like addressing invalid memory. This can happen if you pass a pointer to the MoveBlock function that does not point to a memory area large enough for the function to work on. For instance, you might pass a LabVIEW string that has been preallocated with 1000 characters and tell the function to copy 2 * 1000 = 2000 bytes into it because the source is in Unicode. Or you forgot to preallocate the output buffer altogether, or you made some other mistake when calculating the number of bytes to copy into the target pointer.
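The size arithmetic can be illustrated with a hedged Python/ctypes analog, where `ctypes.memmove` plays the role of MoveBlock and the string content is made up:

```python
import ctypes

text = "Hello"  # 5 characters
# UTF-16 source: every character takes two bytes, so the copy size is
# len(text) * 2 and the destination must be allocated at least that big.
src = ctypes.create_string_buffer(text.encode("utf-16-le"))
nbytes = len(text) * 2
dst = ctypes.create_string_buffer(nbytes)  # preallocate the full size
ctypes.memmove(dst, src, nbytes)           # analogous to MoveBlock
# Copying 2 * len(text) bytes into a buffer allocated for only len(text)
# bytes would write past the end -- the kind of mistake that surfaces
# as error 1097 in LabVIEW.
```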
-
Please mention in the future when you crosspost, as that helps people who want to answer to see if they can add anything beneficial that hasn't already been answered in the other thread. There is no way to create a LabVIEW VISA resource in your calling C code, AFAIK. The VISA resource is an internal LabVIEW datatype handle that contains both the underlying VISA session and the name of the VISA resource as used to open that resource, and possibly other LabVIEW-internal information. The only entity that knows how to create this resource is the LabVIEW kernel. But your C application does not link to the LabVIEW kernel at all, and there is no easy way to determine which LabVIEW kernel your DLL links to. Even if you knew how to link to the LabVIEW kernel (runtime engine) that the DLL uses, there are no documented functions in there to deal with VISA resources. Basically, what you want to do is doomed. Instead you should replace the VISA Resource Name control with a string control. Then you can configure it as a string in the DLL interface and treat it as such in your C code. The only possible drawback is that in the DLL VI the VISA resource will be looked up every time you call that function, as the LabVIEW kernel has to find an already opened session with that name or create a new one if it doesn't yet exist. A little search magic on the NI forum would have given you that answer too.
-
Problems with calling a DLL
Rolf Kalbermatter replied to Nut Buster's topic in Calling External Code
Yes, you are missing the fact that the two arrays in the struct are inlined, not pointers. Even if they were pointers, a LabVIEW array handle is NOT equivalent to a C array pointer, and certainly not to an inlined array. What you need to do is replace your arrays with clusters of 100 U8 elements and 6 U8 elements respectively. Creating a cluster constant with 100 elements by hand is rather inconvenient and time consuming. It is easiest to create these by converting a U8 array constant to the according cluster with the Array to Cluster primitive, not forgetting to set the cluster size by right-clicking on the node and selecting the "Cluster Size ..." dialog. Then assemble the cluster/struct with a Bundle node.
-
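The memory layout in question can be checked with a quick Python/ctypes sketch (the struct and field names here are hypothetical stand-ins for the one in the original question):

```python
import ctypes

class MyStruct(ctypes.Structure):
    # Hypothetical C struct with two *inlined* fixed-size arrays:
    #   typedef struct { uint8_t data[100]; uint8_t tag[6]; } MyStruct;
    # The arrays live inside the struct itself -- no pointers involved,
    # which is why the LabVIEW side needs clusters of 100 and 6 U8s
    # rather than array handles.
    _fields_ = [("data", ctypes.c_uint8 * 100),
                ("tag",  ctypes.c_uint8 * 6)]

# The whole struct is one flat 106-byte block.
assert ctypes.sizeof(MyStruct) == 106
```

Had the arrays been pointers instead, the struct would only be two pointer-sized fields, which is exactly the mismatch that corrupts memory when the wrong representation is chosen.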
They of course can't do that out of the box. A VI in one application context shares absolutely no data with the same VI in another application context on its own. That is even true if you run the FGV in two separate projects on the same computer, and even more so if they run on different devices or different LabVIEW versions. As Jordan suggested, you will need to implement some real inter-application communication here, either through network variables (the easiest, once you get the hang of how to configure them and deploy them to the targets) or your own network communication interface (my personal preference in most cases). There are also reference design projects for cRIO applications, such as the CVT (Current Value Table) and the accompanying CCC (CVT Client Communication), that show a possible approach for implementing the second solution.
-
Telcos indeed have to deal with this all the time. And I would suspect these components to be a significant part of the cost of any switching station they put somewhere in the field. It could very well be that the price of their solutions would kill your superiors when they hear it. I only remember something about what they did for military-grade radio devices at a place I worked at more than 20 years ago. They put so-called Transzorb diodes into the device for every signal path going to any outside connector. Quite a few dozen in fact, and each of them cost something like $100 back then and could be damaged easily during the soldering process of the PCBs. Granted, they were supposed to help not just with lightning but even with the EMP that would occur during a nuclear explosion, but whether that would really work, and really still matter when it happened, is another story.
-
how to read word documents into my labview?
Rolf Kalbermatter replied to seoul's topic in Calling External Code
Fully agree, and scan any links in a post for known site names. The last spammy post contains a much-"promoted" site name.
-
It's not about wires going behind VIs without being connected; that is something I have done myself maybe 5 times in over 20 years of LabVIEWing. It's about seeing to which terminal a wire is really connected. Of course that may often seem unnecessary because of different and incompatible datatypes of the terminals, but it can be a real gotcha when a VI has several datatype-compatible terminals. And yes, it is not as important in my own VIs, since I know how I wired them, but inheriting VIs from someone else is always much easier if such rules have been followed.
-
Are you a network engineer? This is quite specialized matter, and I would not dare to start doing this without consulting a specialist in installing network infrastructure. The wrong protection circuit could well slow down your network traffic while not really protecting your network from real surges. In general there is almost no technology that will not at least get fried itself by a direct lightning strike to the installation or any attached appliance. But for less severe environmental impacts you still need to know quite a bit about both the network characteristics in question and the possible surges you expect. Is it static electricity or rather something else?
-
I can understand that sentiment. But my rule comes from the fact that when I move a VI (by nudging it one pixel up and to the side with the cursor) to see if the wires are correctly attached to the right terminals, and yes, that has been and still is one of my first methods when debugging code that appears to behave strangely, I want to see the wires move with it so I know that they indeed are attached to the terminals they appear to be. With hidden bends you don't have that at all and sometimes need to move the node many steps to see if the wires really attach correctly. And shift-cursor is not a good way to do it. And to Shaun and others: be happy I'm not in the LabVIEW development team. I would penalize any VI call that is not using the 4-2-2-4 connector pane with an extra 1 ms delay, and document it as a special optimization in the compiler for the 4-2-2-4 connector pane pattern.
-
Well, Win8 RT is not an option for several reasons. Any Win8 RT application has to be a full .Net application, as it is based on the virtual machine execution of .Net to achieve hardware independence (so it works on ARM, RISC, and x86 CPUs alike). But the Pipes library uses Windows APIs that are prone to very slight platform differences in the kernel. While Windows tries to maintain backwards compatibility as much as possible, this API is exercised infrequently enough that a few minor incompatibilities can slip in between Windows versions.
-
I wouldn't be surprised if the pipes offer higher throughput than network sockets. They are implemented in the Windows kernel, most likely with some form of shared memory pool that is mapped into both processes. As such they short-circuit quite some overhead compared to going through the Winsock library. However, I have no numbers available. As to the latest version of the available code, the earlier link to the CVS repository on SourceForge is indeed the most recent one that is available and more or less working. I did some more trials on this but didn't get any more reliable operation out of it, and there is also a good chance that it has additional issues on Windows 8. This part of the Windows API is both rather complex and involved, and can be influenced by many internal changes to the Windows kernel.
-
There is no easy solution. The only proper way would be to implement a DLL wrapper that uses the Call Library Node callback methods to register any session that gets opened in some private global queue in the wrapper DLL. The close function then removes the session from the queue. The CLN callback function for abort checks the queue for the session parameter and also closes the session if it is found. That CLN callback mechanism is the only way to receive the LabVIEW abort event properly in external code.
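The bookkeeping involved can be sketched in a few lines. This Python analog is only an illustration of the pattern; the real wrapper would be C code hooked into the Call Library Node callbacks, and all names here are made up:

```python
import threading

class SessionRegistry:
    """Sketch of the wrapper-DLL pattern: every opened session is recorded
    in a private global table so an abort handler can close leftovers."""
    def __init__(self):
        self._lock = threading.Lock()
        self._sessions = {}  # session id -> function that closes it

    def register(self, session_id, close_fn):
        # Called by the 'open' wrapper after a session is created.
        with self._lock:
            self._sessions[session_id] = close_fn

    def unregister(self, session_id):
        # Called by the 'close' wrapper on a normal shutdown.
        with self._lock:
            self._sessions.pop(session_id, None)

    def abort(self):
        # Called from the abort path: close anything still open.
        with self._lock:
            for close_fn in self._sessions.values():
                close_fn()
            self._sessions.clear()
```

In the real wrapper DLL, the open function would call `register`, the close function `unregister`, and the CLN abort callback would trigger `abort` so no hardware session is left dangling when the user hits the abort button.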
-
He just made a post replying to another post by one of his "buddies", asking if they have a trial version. Cute! Maybe Michael needs to add a specific check and reject any post with a link to yiiglo.com, rasteredge.com, or businessrefinery.com. Also interesting to know is that the "Company" link on the site arronlee likes to promote so much does not work at all. Always nice to do (online) business with a company that could literally sit on cloud 7 as soon as you are not happy about something.