
Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. Most likely things are a little trickier than that. The Create Listener function has an optional net address input that allows you to bind the listen socket to a specific network adapter. Leaving that input unwired binds the listen socket to all network adapters, and this is most likely what the VI server does too. It can't really know which interface you want to use, and making that configurable would add yet another setting and make the VI server configuration more complex. So binding to all interfaces is the simplest approach here. Rolf Kalbermatter
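The distinction between leaving the net address input open and wiring a specific address can be sketched with POSIX sockets (a minimal illustration, not LabVIEW's implementation; the function name is made up for the example):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch of the post's point: an unwired net address corresponds to
   binding the listen socket to INADDR_ANY (all adapters); wiring an
   address binds it to just that one interface. */
int make_listener(const char *addr_or_null, uint16_t port)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) return -1;

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    sa.sin_addr.s_addr = addr_or_null ? inet_addr(addr_or_null)
                                      : htonl(INADDR_ANY);  /* all adapters */
    if (bind(s, (struct sockaddr *)&sa, sizeof sa) < 0 ||
        listen(s, SOMAXCONN) < 0) {
        close(s);
        return -1;
    }
    return s;
}
```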
  2. Not really. I think you managed to come up with a few possible obstacles and problems that, if they surface, might kill this idea too. But I have no better alternative for you at the moment. Rolf Kalbermatter
  3. Adam's suggestion is correct and should work. He only omitted that the string name you get will always be 32 characters long, with zero characters after the end of the string. So after converting the name cluster into a byte array with Cluster to Byte Array, you should search for the first 0 byte, cut the array there, and only then convert it into a string. The pad cluster you can simply drop; it will most likely not contain any useful information and is just there to make sure the array elements align properly in memory. Rolf Kalbermatter
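The truncate-at-the-first-zero-byte step can be sketched in C (a hypothetical helper mirroring what Cluster to Byte Array plus a search for the first 0 byte does in LabVIEW; the field size is taken from the post):

```c
#include <string.h>

/* Trim a fixed-size 32-byte name field at the first NUL byte, the same
   operation the post describes doing on the LabVIEW diagram. */
#define NAME_FIELD_LEN 32

size_t name_length(const unsigned char raw[NAME_FIELD_LEN])
{
    /* memchr finds the first 0 byte; if there is none, the name
       occupies the whole field. */
    const unsigned char *nul = memchr(raw, 0, NAME_FIELD_LEN);
    return nul ? (size_t)(nul - raw) : NAME_FIELD_LEN;
}
```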
  4. Actually they can. LabVIEW internally knows a data type called subarray. It is just a structure containing a reference to the original array handle plus some bookkeeping information such as offset, stride, and so on. Most array functions know how to deal with subarrays, and if they don't for a particular subarray configuration they invoke a subarray-to-normal-array conversion, which of course incurs a new buffer allocation. I would be pretty sure LabVIEW handles these things in an object oriented manner, so it is not such a complicated thing but rather a well structured object method table handling the various variations of arrays and subarrays. The reason they do it is performance optimization: memory allocations and copies are very expensive, so spending some time trying to avoid them can pay off big time. Rolf Kalbermatter
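The subarray idea can be sketched as a small C descriptor (the field names and types are assumptions for illustration, not LabVIEW's actual internals):

```c
#include <stddef.h>

/* Hypothetical subarray "view": a descriptor that references the
   original buffer instead of copying it, carrying the bookkeeping
   information (offset, stride, length) the post mentions. */
typedef struct {
    const double *data;   /* pointer into the original array's storage */
    size_t offset;        /* index of the first element of the view    */
    size_t stride;        /* distance between consecutive elements     */
    size_t length;        /* number of elements in the view            */
} SubArrayView;

double subarray_get(const SubArrayView *v, size_t i)
{
    /* No allocation, no copy: just indexing into the original data. */
    return v->data[v->offset + i * v->stride];
}
```

A function that cannot work with such a view would first expand it into a contiguous array, which is exactly the extra buffer allocation the post describes.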
  5. Stock Wine does not support building under Solaris very well. The problem is not that it can't be done, but Wine is a moving target and its main development is obviously Linux based. There are only a few developers working on Wine for Solaris, and the patches for it coming into Wine are sparse. LabVIEW 7.x most likely would run under Wine nowadays, since the tests you mention are very old, from a time when Wine was still considered in its infancy despite being more than 10 years old back then. Running Linux apps under Solaris seems to me like an exercise in vain. Rolf Kalbermatter
  6. LabVIEW's Typecast is more complex than that. It is in essence a typecast like what you see in C, but with the extra twist of byte swapping any multi-byte integer to Big Endian format on the byte stream side. I think the problem here is that Unflatten does other things, like checking the input string length for validity. The implementation of Unflatten is certainly a lot more complex, since it has to work with any data type, including highly complicated variable sized types such as clusters containing variable sized data, which in turn contain more such data, and so on. Typecast on the other hand only works on flat data, which excludes any form of cluster containing variable sized data. Possibly Flatten/Unflatten could be improved, since little endian conversion on a little endian machine should certainly not take longer than the Typecast plus an additional byte swap, but the priority for such a performance boost might be rather low, since it would certainly make the implementation of Flatten/Unflatten even more complex and hence more prone to bugs. But thanks for showing me that the good old Typecast/Swapping combination still seems to be the better way than using Flatten/Unflatten with the desired endian setting. The reason for the big endian byte stream format is that LabVIEW originates from the Mac with its 68000 CPU, which was always a big endian CPU. While the later PPCs in the PPC Macs had the option to use either big or little endian as the preferred format, Apple chose to keep the same big endian format that came from the 68k. When NI ported LabVIEW to Windows (and later to other architectures like Sparc and PA-RISC) they had to tackle a problem: in order to send binary data to a GPIB device or over the network, one had always used the Typecast or Flatten operator to convert it into a binary string, and it would have been very nice if data sent over the network or written into a binary file by a LabVIEW program on the Mac could be easily read by a LabVIEW program on Windows.
This required the same byte order for flattened data, so the flattened format was specified to be always big endian, independent of the platform LabVIEW is running on. A C-style typecast would be difficult to do in LabVIEW. Trying to do it with a small external code library could be an option, but it is quite tricky: it's not enough to simply swap the handles, you also need to adjust the array length in the handle accordingly, so a different function for each integer size would be required. Rolf Kalbermatter
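The always-big-endian flattening described here can be sketched in C (a minimal, platform-independent illustration, not LabVIEW's actual implementation):

```c
#include <stdint.h>

/* Flatten a 32-bit integer to LabVIEW's byte stream format, which is
   always big endian regardless of the host CPU's native byte order.
   Shifting instead of memcpy makes the result endian-independent. */
void flatten_i32_be(int32_t value, unsigned char out[4])
{
    uint32_t u = (uint32_t)value;
    out[0] = (unsigned char)(u >> 24);  /* most significant byte first */
    out[1] = (unsigned char)(u >> 16);
    out[2] = (unsigned char)(u >> 8);
    out[3] = (unsigned char)(u);
}
```

On a big endian machine this is a plain copy; on a little endian machine it is the byte swap the post talks about.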
  7. Well, I just checked it again. It has been a long time that I have worked with this but DDE Server is unfortunately not an option here using the LabVIEW DDE VIs. The LabVIEW internal DDE callback specifically refuses to receive DDE Execute commands and that is exactly the method used by the Windows shell to pass such requests to an application. You would have to implement your own DDE server in C and integrate it as DLL which I think is not a useful exercise in terms of effort required and the benefit you get. As to the solution to your problem I'm sure it has been talked about in the threads linked to from this one as well as in a previous post by me and others in this thread before. With any respect but I find this remark rather amusing. DDE is an old legacy technology. If it is any more secure than TCP/IP then only by its obscurity but certainly not by its way of implementation. DDE is a technology that origins from Windows 3.x days when applications had no seperate virtual memory and could write in each others memory anyway they wanted. Great for implementing interapplication communication schemes like DDE since you had almost nothing to do to allow that. Absolutely not so great for security. In order to make DDE work in Win32, they had to jump through many hoops and add a lot of code that does some obscure things deep in the windows event management to allow for proper operation of it. As such I would never trust it to be really secure, other than the obscurity fact, since very few people know nowadays about DDE and how to use it. The use of TCP/IP (either explicitedly or through VI server) has the advantage that everything will keep working exactly the same if you happen to use this on a non Bill Gates sanctioned OS . There is AFAIK no ActiveX server method to assign file extensions. 
Even if there would be an alternative that allows to invoke an application based on file extension, the LabVIEW ActiveX server would not be compatible since it only exports its own ActiveX Class Interface and not any other ones. Microsoft would for sure have designed a specifc COM interface for this, that such applications would have to expose, but as said I'm not aware of an alternative ActiveX activation based on file extensions. How things currently happen: If a file extension has a DDE Server registry entry the Windows shell simply tries to contact that server. Failing that it will launch the executable and optionally pass it the file as command line parameter (command/open verb contains the %1 parameter). If that parameter is missing it will try again to contact the application through DDE and pass it the open verb with the file path. Rolf Kalbermatter
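The registry layout behind that shell behavior can be sketched like this (a hypothetical .reg-style outline for an imaginary `.myext` extension and `MyApp` application; key names follow the standard shell verb scheme, and the exact DDE subkeys are simplified):

```
[HKEY_CLASSES_ROOT\.myext]
@="MyApp.Document"

[HKEY_CLASSES_ROOT\MyApp.Document\shell\open\command]
; %1 is replaced by the file path; if %1 is absent the shell falls back to DDE
@="\"C:\\Program Files\\MyApp\\MyApp.exe\" \"%1\""

[HKEY_CLASSES_ROOT\MyApp.Document\shell\open\ddeexec]
; DDE Execute command sent to an already running server for this class
@="[open(\"%1\")]"
```

The shell tries the ddeexec route first; only when no DDE server answers does it launch the executable from the command key.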
  8. There are many ways, and which is best depends on you. But the only one that does not involve tackling improperly working private LabVIEW methods, properties and events, or writing a bunch of C code to do the "right" © thing in a DLL and integrating that into your app, is to go about it like it is explained in the wiki. The advantage of this is also that it will basically work in all versions and platforms of LabVIEW that support the pass command line parameters feature, which is every desktop LabVIEW version since at least LabVIEW 7.0. I think you have all the information necessary to come up with a working solution in a day or so, so get started and show us the code when you do not get any further. Rolf Kalbermatter
  9. Battler, adding a comment to that idea is nice, but supporting it by voting for it and clicking the Kudos button would probably help more. These ideas are all weighted, and the decision to implement them is based on various criteria such as:
- the necessary work to do it (not terribly much, but it is a tricky thing to get right for all LabVIEW platforms and will require quite some testing)
- the availability of resources (developers and their time)
- and last but not least, the number of votes an idea gets
Rolf Kalbermatter
  10. It adds properties and methods to the LabVIEW VI server hierarchy, mostly application related and presumably project and other such stuff, that NI considers too dangerous, too untested, or giving too deep an insight into LabVIEW. It is related to scripting but not the same thing. Rolf Kalbermatter
  11. It's considered friendly to mention cross posts to other boards, even if they are on the "dark side". During a recent crash all of the content got lost and had to be restored from backups, which broke many links. The admins can fix them if they know where to look exactly, but this is manual work, so be kind and give them the exact information about what is broken and some time to fix it. VIPM is the flagship software from JKI, so they are not likely to give out the source code for it. But once the links are restored you should get the examples you need. The principle is not so complicated: first you have a tiny little LabVIEW EXE file that gets assigned the file extension(s). It is always started with the filename as command line parameter; it takes those command line parameters, opens an interapplication communication channel (TCP/IP, DDE, etc.) to your real application, and passes the command line parameters over that channel. After that it simply terminates, ready to be launched again. Of course there is a little more to it, since the command line interceptor has to check whether the main application has been started and, if not, do so first. Unfortunately this is absolutely necessary in pre-8.2 scenarios, since the LabVIEW DDE server functions never implemented receiving DDE Execute commands (the client functions support sending them though), and that is how the Windows shell passes those requests to an already started application. I once tried to implement a new version of the DDE functionality in LabVIEW that would support receiving DDE commands too, but never got very far due to time constraints and lack of any need for it myself. From 8.2 on you have the private Open Document event in the event structure. It seems to have been there since then without really changing, but there is a problem with it: it throws an unknown LabVIEW document error the first time after the application has been launched.
Not sure if this problem still exists in the most recent LabVIEW versions. Rolf Kalbermatter
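The interceptor's hand-off step can be sketched in C (a minimal illustration of the idea; the newline-separated message format, the port, and the function name are all made up for this example, and the actual TCP send is reduced to a comment):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the tiny launcher EXE's job: collect the file
   paths the shell passed on the command line into one message that the
   launcher would then send to the already-running main application
   over an interapplication channel such as TCP/IP. */
int build_open_message(int argc, char *argv[], char *out, size_t outlen)
{
    out[0] = '\0';
    for (int i = 1; i < argc; i++) {        /* argv[0] is the EXE itself */
        if (strlen(out) + strlen(argv[i]) + 2 > outlen)
            return -1;                      /* message would overflow    */
        strcat(out, argv[i]);
        strcat(out, "\n");                  /* one path per line         */
    }
    /* Real interceptor: check the main app is running (start it if not),
       connect() to e.g. 127.0.0.1:5566, send(out), then exit so the
       shell can launch this EXE again for the next file. */
    return 0;
}
```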
  12. I have found that, given a wire, getting its Terms[] property lets you get directly to the objects (node, terminal or whatever) the wire is connected to. For the Control class (control and indicator terminals) the Term is directly the control (cast it using To More Specific Class), while for other objects you can use the Owner property to get at the node, structure or whatever owns the term. As to the first element in a Terms[] array always being a source, that is not entirely true: if the wire has no source at all (a broken wire), the first element will be one of the sinks it is connected to. Rolf Kalbermatter
  13. Why not turn the whole idea around? You want callback support, and the ActiveX (long ago) and .Net (since version 8) interfaces in LabVIEW support it out of the box. So the real solution would be to write a thin ActiveX or .Net wrapper around your DLL code that translates the DLL callback into a corresponding ActiveX or .Net event. Your DLL then invokes the callback function, which in turn sends the ActiveX/.Net event to LabVIEW; LabVIEW invokes the according callback VI, which returns whatever data is required to the ActiveX/.Net event, which then returns control to the callback. Yes, it is not trivial, and it would not work for tight low level kernel type callback drivers, but nothing will work directly with such callback drivers from a high level environment like LabVIEW. You could make it sort of work from a simple C program, but certainly not from a .Net application or such either. If you need to pass large amounts of data from the callback to LabVIEW or vice versa, you would have to opt for an in-process ActiveX or .Net wrapper; otherwise you can go with an out of process wrapper too. ActiveX/.Net will take care of marshaling the data between an out of process server and LabVIEW (and .Net may do marshaling no matter what). Marshaling is fine for small amounts of data, but if you plan to move large amounts it is going to create an additional bottleneck. Rolf Kalbermatter
  14. Why do you say this is inferior? It is not, because doing that is basically the only proper way to merge a C callback with LabVIEW's dataflow programming. There are several possibilities, depending on your requirements:
1. Calling the PostLVUserEvent() function you can pass data back to LabVIEW, in fact any data you want, but you need to create LabVIEW compatible native data for that (LabVIEW handles for strings and arrays). The event is then processed in the according user event case in the event loop, so there is a possible serialization problem if all the callbacks are processed in the same event structure.
2. You can use Occur() to trigger a LabVIEW occurrence. The advantage is that it is not serialized with other occurrences you might trigger from other callbacks, but the disadvantage is that you cannot pass data along with the occurrence trigger.
3. To solve the problem of the occurrence not having any means to pass data, you can write your own queue code in C that holds elements the callback functions put in before they trigger the occurrence. When the occurrence is triggered, LabVIEW can read the queue for new data.
So it is definitely not inferior to do that. The only problem is that you do need to write some (good and not very trivial) C code, but that is no reason to call it inferior. The main issue is not so much the fact that those callbacks need to access GOOP data or any LabVIEW data, but much more that it is a maintenance nightmare. As soon as those callback DLLs are not compiled in exactly the same LabVIEW version as the one you use to call them, those DLLs will execute in the according LabVIEW runtime engine and as such are in fact almost out of process from the calling LabVIEW environment. This will indeed make sharing data between the DLL and the calling LabVIEW process impossible, and there are also other reasons why you rather don't want to do that.
Actually, LabVIEW executables can serve as ActiveX Automation servers just as well as the LabVIEW development system can. Check out the build settings of your project for the place where you enable the ActiveX server and specify the name under which it will be registered. But the problem with using ActiveX for this will probably be the same as for TCP/IP, since they are both out of process technologies; if TCP/IP won't work, ActiveX will likely be even worse. ActiveX and .Net have a very strictly typed interface description from which LabVIEW can get all the necessary information to create the interfacing code for you. In comparison, direct C code interfaces have no formal description of the data types and calling interfaces at all, and no, a header file does not count as that by a very long stretch, since it is missing a lot of information that is sometimes buried somewhere in a text documentation but more often is simply the result of programmer intuition and trial and error with a good source code level debugger. These last three things are something LabVIEW is still several light years away from being able to do. Of course there could be something like the Call Library Configuration dialog to allow configuring callback interfaces to VIs that can then be passed as function pointers to Call Library Nodes, but that dialog would likely be even more complicated than the Call Library Node dialog and, considering how much difficulty most LabVIEW users have with the existing CLN already, as such very hard to use for more than 99% of LabVIEW users. Seems to me you got stuck between a rock and a hard place. Why port 3.5 MB of C code to LabVIEW at all? From what I see, you seem to need to actually make decisions in the callback and return data from there to the caller of the callback, so this means synchronous operation of the callback.
In that case you are right that the only possible solution would be to wrap LabVIEW VIs into a DLL, making sure this DLL is created in exactly the same LabVIEW version as your calling LabVIEW application, then GetProcAddress() those DLL function pointers and pass them to the Call Library Node as callback pointers. As I already said, this is going to be a rather messy maintenance nightmare, and there is another problem too: the C wrapper created for calling the actual VIs in those DLLs has to process and convert all variable sized data from the C function pointer into LabVIEW handles and, before returning, back into C pointers. This can be a lengthy and performance hungry process for the amount of data you seem to think would make a TCP/IP interface unfeasible. Rolf Kalbermatter
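The third possibility (a C-side queue drained when the occurrence fires) can be sketched like this; the ring-buffer layout and names are assumptions for illustration, and the Occur() call from LabVIEW's extcode API is reduced to a comment:

```c
/* Hypothetical fixed-size ring buffer that callback functions push into
   before triggering the LabVIEW occurrence; the LabVIEW diagram would
   then read elements out via another exported function. Locking is
   omitted for brevity; a real implementation needs a mutex around
   push and pop since the callback runs on a foreign thread. */
#define QUEUE_CAP 16

typedef struct {
    int items[QUEUE_CAP];
    int head, tail, count;
} CallbackQueue;

int queue_push(CallbackQueue *q, int value)
{
    if (q->count == QUEUE_CAP) return 0;        /* full: drop or block */
    q->items[q->tail] = value;
    q->tail = (q->tail + 1) % QUEUE_CAP;
    q->count++;
    /* Real code: Occur(occurrence); wakes the waiting LabVIEW diagram. */
    return 1;
}

int queue_pop(CallbackQueue *q, int *value)
{
    if (q->count == 0) return 0;                /* empty */
    *value = q->items[q->head];
    q->head = (q->head + 1) % QUEUE_CAP;
    q->count--;
    return 1;
}
```

On the LabVIEW side, a Wait on Occurrence followed by a Call Library Node calling the pop function drains the queue each time the callback signals.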
  15. Yes, you have to! Don't configure those as strings, since LabVIEW strings are neither C string pointers nor fixed size strings. Leave them as the byte clusters as you did before and convert them to strings after you get the data from the DLL. You need to set up the clusters just the same as before, but doing this with Adapt to Type and pass as C array pointer (only possible in LabVIEW 8.2 and above) will save you the hassle of typecasting/unflattening and byte and word swapping. Rolf Kalbermatter
  16. Very easy: use your own installer technology, like Inno Setup or InstallShield. Rolf Kalbermatter
  17. Well, I do sometimes have a cluster, but it contains only things that are specific to the entire state machine, such as a flag remembering the previous state for cases where I need to do special handling depending on where the current state was coming from. Of course this could be entirely avoided with a different state separation, but sometimes it is easier to add such special handling after the fact than to redesign several states more or less completely. Rolf Kalbermatter
  18. I personally wouldn't do that either. I would tend to put those object references into functional globals too. (But that would be isolating and encapsulating the object itself into a sort of higher class, which I already tend to do in my current functional globals design, so that might be why I can't see the additional benefit of getting LVOOP involved in the picture.) Rolf Kalbermatter
  19. What version of LabVIEW is that in? Any chance that your CAN VIs sometimes return an error in the error cluster? The Release Semaphore VI in versions prior to LabVIEW 8.6 only executed when no error was passed to its error in input. Rolf Kalbermatter
  20. If the clock hands do not need to look exactly like this you could use a gauge. Otherwise it will not really work well in LabVIEW anyhow. The hands would need to be imported as graphics; doing that as vector graphics has been difficult to impossible in all versions of LabVIEW so far, and rotating bitmaps is a very ugly thing to do. Rolf Kalbermatter
  21. It seems to me you are keeping all the data the states work on in this shift register. This is in most cases not a very good idea, as it ties the entire state machine implementation very strongly to the data of all states, even though most states will only work on some specific part of it. There is also a possible performance problem if your state data cluster grows larger and larger over time. I have only a very limited state data cluster in my state machines, limited to data that is important to most of the states and directly important to the state machine itself. The rest of my application data is stored in various functional globals (uninitialized shift registers with different methods). (And yes, I know this is a lot like doing LVOOP without the formal framework of LVOOP, and I should be looking into using LVOOP, but I have not yet found the drive and time to do so.) The various VIs pull whatever data they need from those functional globals when and where they need it and put the data back when needed. This keeps the state machine implementation very much decoupled from the application data itself and allows for much easier additions and modifications later on. The state machine itself will not document what data it uses, but I feel that is information that is not exactly part of the state machine design itself; it should be defined on a much higher level (anyone for an application design specification?). The individual data should not really be important to the different states in most cases; what you want to know is what state transitions your state machine makes when, where and how. Rolf Kalbermatter
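The decoupling argued for here can be sketched in a hypothetical C miniature: the loop's "shift register" carries only the state itself, while the application data lives behind accessor functions, the text-code analogue of a functional variable:

```c
/* Miniature of the pattern: the state loop holds only the current
   state; the counter data is owned by its accessors, so states that
   don't need it never touch it. All names are made up for the sketch. */
typedef enum { ST_IDLE, ST_RUN, ST_DONE } State;

static int g_counter;                       /* data behind the accessor */
static void counter_reset(void) { g_counter = 0; }
static void counter_inc(void)   { g_counter++; }
static int  counter_get(void)   { return g_counter; }

State step(State s)
{
    switch (s) {
    case ST_IDLE:                            /* initialize, then run    */
        counter_reset();
        return ST_RUN;
    case ST_RUN:                             /* work until done         */
        counter_inc();
        return counter_get() < 3 ? ST_RUN : ST_DONE;
    default:
        return ST_DONE;
    }
}
```

Adding a new state or changing what the counter stores touches only the accessors and the states that use them, not the machine's wiring, which is the maintainability benefit the post describes.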
  22. Well, Swap Words swaps the two 16-bit halves in a 32-bit value. It has no effect whatsoever on 16-bit values (words) or 8-bit values (bytes). So you do not need to apply Swap Bytes and Swap Words to each individual value; you can instead put them simply on the cluster wire itself. They will swap where necessary, and nothing more. Rolf Kalbermatter
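The effect of the two primitives on a 32-bit value can be illustrated in C (the function names are made up; Swap Bytes swaps the bytes within each 16-bit word, Swap Words swaps the two 16-bit halves):

```c
#include <stdint.h>

/* Swap Bytes: swap the two bytes inside each 16-bit word. */
uint32_t swap_bytes_u32(uint32_t v)
{
    return ((v & 0x00FF00FFu) << 8) | ((v & 0xFF00FF00u) >> 8);
}

/* Swap Words: swap the two 16-bit halves of the 32-bit value. */
uint32_t swap_words_u32(uint32_t v)
{
    return (v << 16) | (v >> 16);
}
```

Applying both together performs the full 32-bit endian reversal, while on a bare 16-bit value only the byte swap does anything, which is why putting the primitives on the cluster wire swaps each element exactly as much as its size requires.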
  23. Adam, you missed in your previous post that the original problem was not about interfacing a DLL specifically made for LabVIEW, but about interfacing to an already existing DLL. There you normally have no way of making it use LabVIEW data handles! Your last option is a good one, however. I had missed that possibility, since Pass: Array Data Pointer for Adapt to Type is a fairly new feature of the Call Library Node; it seems it was introduced with LabVIEW 8.2. And there is no pragma necessary (or even possible, since the DLL already exists), because the structure already uses filler bytes to align everything to the element size boundaries itself. The void arg[] datatype generated by LabVIEW is not really wrong or anything, as it is fully valid C code; it is just not as perfect as LabVIEW could possibly make it. I guess when the Pass Array Data Pointer option was added to Adapt to Type, that person did not feel like adding extra code to generate a perfect parameter list and instead took the shortcut of printing a generic void data pointer. Rolf Kalbermatter
  24. I tried to report http://lavag.org/top...dpost__p__63616 as possible spam but got that error message, and earlier the same in other sub forums with other messages. Rolf Kalbermatter Admin edit: The report didn't make it through to the report center.
