Everything posted by Rolf Kalbermatter

  1. You made a copy-paste error with your rinocloud link!
  2. I'm excited to announce that Lua for LabVIEW 2.0 for Windows has been released. Please go to http://www.luaforlabview.com to find out more. This release supports the 32-bit and 64-bit versions of LabVIEW for Windows. Support for other platforms, including NI real-time targets, will follow shortly.
  3. Someone certainly has a very unrealistic view of me here! Your problem is most likely the embedded pointers inside the structure pUiInfo. While it's possible in LabVIEW to allocate memory buffers and assign the resulting pointer to the entry in the cluster, it is a big hassle. Also, the structure needs to be different for 32-bit and 64-bit LabVIEW, since pointers and handles in Windows match the pointer size of the system too. An extra problem might be that on the MSDN page there is a user comment claiming that the ANSI version of this function does not work properly. Since LabVIEW uses ANSI strings everywhere, this would require your strings to be translated to wide-char UTF-16 so that the W function can be called. All in all it is a lot more work than 30 seconds, for sure. And I'm not even sure it is that much safer. The Windows dialog uses Windows controls, and they can be targeted from a different process with enough privilege escalation. The LabVIEW controls, on the other hand, are a lot harder to target from outside the LabVIEW process, since they are fully implemented in LabVIEW itself. It would actually help to see what you have done so far. If it is something minor I'm certainly willing to point you in the right direction, but I have no inclination at all to build this from scratch.
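     To give an idea of the direction, here is a rough C sketch. It assumes the API in question is CredUIPromptForCredentialsW (whose structure parameter is conventionally named pUiInfo); the exported function name and its parameters are made up for this example. The wrapper takes ANSI C strings, which LabVIEW can pass directly from a Call Library Function Node, converts them to UTF-16 and builds the structure in C, so the diagram never has to deal with embedded pointers or bitness at all:

        /* link against credui.lib */
        #include <windows.h>
        #include <wincred.h>
        #include <stdlib.h>

        static WCHAR *AnsiToWide(const char *src)
        {
            int n = MultiByteToWideChar(CP_ACP, 0, src, -1, NULL, 0);
            WCHAR *dst = (WCHAR *)malloc(n * sizeof(WCHAR));
            if (dst)
                MultiByteToWideChar(CP_ACP, 0, src, -1, dst, n);
            return dst;
        }

        /* hypothetical export for a LabVIEW Call Library Function Node */
        __declspec(dllexport) DWORD PromptCredentials(HWND parent,
            const char *caption, const char *message, const char *target,
            char *user, int userLen, char *passwd, int passwdLen)
        {
            WCHAR userW[256] = L"", passwdW[256] = L"";
            BOOL save = FALSE;
            DWORD err;
            /* the structure with the embedded pointers is built here in C,
               where pointer size and string encoding are handled naturally */
            CREDUI_INFOW info = { sizeof(info), parent, NULL, NULL, NULL };
            WCHAR *msgW = AnsiToWide(message), *capW = AnsiToWide(caption),
                  *tgtW = AnsiToWide(target);

            info.pszMessageText = msgW;
            info.pszCaptionText = capW;
            err = CredUIPromptForCredentialsW(&info, tgtW, NULL, 0, userW, 256,
                      passwdW, 256, &save, CREDUI_FLAGS_GENERIC_CREDENTIALS);
            if (err == NO_ERROR)
            {
                WideCharToMultiByte(CP_ACP, 0, userW, -1, user, userLen, NULL, NULL);
                WideCharToMultiByte(CP_ACP, 0, passwdW, -1, passwd, passwdLen, NULL, NULL);
            }
            free(msgW); free(capW); free(tgtW);
            return err;
        }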
  4. I think copying a higher-version vi.lib to a lower-version device driver installation is almost certainly asking for trouble. While the newer version may use new APIs for enhanced operations, the lower-version system driver would not support them. That can cause immediate trouble when loading the VIs, as they might attempt to link to non-existing APIs in the old driver, or it may only become apparent later at runtime, when certain low-level device driver methods are invoked with new extended parameters that the driver does not support. So personally, if you want to do such an installation, chances are better that it will work if you install the highest-level driver package with support for as many LabVIEW versions as possible and leave the vi.lib on the older LabVIEW versions at the highest supported driver version. In your example this would most likely mean installing DAQmx 14.0 etc. on the computer to get it to install the DAQmx and other drivers into the LabVIEW 2011 installation, then renaming the LabVIEW 2011 folder temporarily to something else, so that the DAQmx 15.0 installer won't see it anymore and won't remove the DAQmx support from it. This leaves a pretty good chance that it will still work after renaming the LabVIEW 2011 folder back once the new drivers have been installed. But as explained earlier, I would not recommend that solution for a production-quality build system at all, as you may end up with obscure errors that are very hard to debug, and a bug report to NI won't help much since you are working with an unsupported installation.
  5. Theoretically, the shared resources would be upwards compatible such that the 2011 VIs "should" be able to work with the 2015 binary resources (shared system libraries and device drivers). Practically, anyone who has written such drivers knows that this is VERY difficult to do and absolutely impossible to guarantee without explicitly testing it all in every detail. Now take into account that many of the NI drivers are for multiple platforms (Windows 32-bit and 64-bit, Mac OS X 32-bit and 64-bit, Linux 32-bit and 64-bit, Pharlap ETS, VxWorks, NI Linux Real-Time) and that NI usually provides backwards compatibility for the last 3 LabVIEW versions, and you quickly see that adding even one more version to this is definitely going to have a huge extra impact on testing. And every time there is an incompatibility in any of those combinations, someone has to go in, make a fix and then test again all around. If you don't limit the scope somehow, you end up testing, fixing and testing again an unlimited number of times and no product gets released anymore. I have tried such installations in the past, but not for production-type development. It was mostly to be able to look at older source code without having to load it into a newer LabVIEW version. I never really executed anything substantial on real hardware. The recommendation to use virtual machines is actually the most sensible in this case, aside from having dedicated hardware for each version.
  6. Does anyone know who maintains the labviewwiki.org site, and whether the service is temporarily down or should be considered discontinued?
  7. They work fine if you use them without authentication or from LabVIEW to LabVIEW. Otherwise you run into trouble, since NI has so far refused to document the NIAuth mechanism used in them!
  8. Just a short heads-up. There is going to be a new release of Lua for LabVIEW 2.0 in the next few days. The initial release will be for LabVIEW for Windows 32-bit and 64-bit only. Linux, Mac OS X and NI real-time target support will follow at the beginning of next year. Currently I'm testing the software and cleaning up the documentation for it. Keep an eye on http://www.luaforlabview.com for more news about this.
  9. .Net is in some ways better than ActiveX in the areas Shaun mentions. ActiveX is an extension of OLE and COM, which have their roots in old Windows 3.x times. Back then, preemptive multitasking was something reserved for high-end Unix workstations, and PCs had to live with cooperative multitasking. So many things in Windows 3.1, OLE and COM assumed a single-threaded environment or, at best, what Microsoft called apartment threading. The latter means that an application can have multiple threads, but any particular object coming from an apartment-threaded component always has to be invoked from the same thread. LabVIEW, having started on the Mac and then been ported to Windows 3.1, heavily inherited those single-threading issues from both OSes. It was "solved" by having a so-called root loop in LabVIEW that dispatched all OS interactions such as mouse, keyboard and OS events to whatever component in LabVIEW needed them. When LabVIEW got real multithreading support in LabVIEW 5, this root loop was maintained and located in the main thread that the OS starts up when launching LabVIEW. It is also the thread in which all GUI operations are executed. Most ActiveX components never supported anything more than apartment threading, as that kept development of the component simpler. LabVIEW honors that by executing the method and property calls for those ActiveX components from the main thread (usually called the UI thread). That can have certain implications. Out-of-context or remote ActiveX components are thunked by Windows through the OLE RPC layer, and the corresponding message dispatch for this OLE thunking is executed in the Windows message dispatch routine that is called by LabVIEW in its root loop. Even a slight error in the Windows OLE thunking, the ActiveX component or the LabVIEW root loop in how the various events are handled can lead to a complete lockup of the message dispatch, and with that the root loop of LabVIEW, and absolutely nothing works anymore. Theoretically, other threads in LabVIEW can continue to run, and they actually do, but without keyboard, mouse and GUI interaction an application is considered pretty dead by most users. .Net is less susceptible to such problems, but not entirely free of them, as it still inherits various technologies from COM and OLE deep down in its belly. My personal issue with both is that they involve a very complex infrastructure in addition to the LabVIEW runtime that: 1) has to be loaded at application startup, delaying the startup even more; 2) while very easy to use when it works, is almost impossible to understand when things go wrong; 3) being a Microsoft technology, has a big chance of being obsoleted or discontinued when Microsoft tries to embrace the next hype that hits the road (DDE, while still present, is dead; OLE/COM was superseded by ActiveX; ActiveX is now highly discouraged in favor of .Net; Silverlight has been axed already). Betting on those technologies has had a very good chance of getting sidetracked so far, as NI has had to find out several times, the last time with Silverlight.
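     To make the apartment threading point concrete, here is a small C sketch (the function name is made up for illustration) of how a thread declares its COM threading model; classic ActiveX components typically only ever ran in the first mode:

        #include <objbase.h>   /* link with ole32.lib */

        void init_com_for_thread(int useApartment)
        {
            /* COINIT_APARTMENTTHREADED: objects created on this thread may only
               be entered from this thread; calls from other threads are thunked
               through OLE RPC and dispatched by this thread's message loop (in
               LabVIEW that pumping happens in the root loop of the UI thread).
               COINIT_MULTITHREADED: objects may be entered from any thread,
               something most classic ActiveX components never supported. */
            CoInitializeEx(NULL, useApartment ? COINIT_APARTMENTTHREADED
                                              : COINIT_MULTITHREADED);
        }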
  10. That's right. Before LabVIEW 5, Booleans were 16-bit integers and Boolean arrays were packed into 16-bit integers too. That however had performance penalties in several places and was anything but standard compared to anything else except some Mac OS precedents, so it was dropped in favor of a more standard implementation with a byte for each boolean. The reality is that there is no such thing as a perfect implementation for every possible use case. The packed-array solution you assumed LabVIEW uses had severe performance limitations in LabVIEW 3 and was therefore dropped in favor of a more common approach that consumed more memory when boolean arrays were involved, but made conversion between boolean arrays and other formats simpler and generally more performant. But there is a simple rule: if you are concerned about this kind of performance optimization, then don't use boolean arrays at all! They involve memory manager operations all over the place, and those are magnitudes slower than the few hundred CPU cycles with which you can do just about any boolean operation you might ever dream up. For such performance optimization you need to look at properly sized integers and do boolean arithmetic on them. And if your "boolean array" gets over 64 bits you should definitely look at your algorithm. Most likely you have chosen the easy path of working with boolean arrays in order not to have to think about a proper algorithm implementation, but if you go that path, worrying about sub-microsecond optimizations is definitely a lost battle already. One of the worst performance killers with the packed boolean array implementation in LabVIEW 3 was autoindexing. Suddenly a lot of register shifts and boolean masking had to be done on every autoindexing terminal for a boolean array. That made such loops magnitudes slower than a simple address increment when using byte arrays for boolean arrays.
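     As a sketch of what I mean with boolean arithmetic on integers (written in C here; in LabVIEW you would wire the same logic with the integer logic and shift primitives), assuming your flag set fits in 64 bits:

        #include <stdint.h>
        #include <stdbool.h>

        /* 64 "booleans" packed into one integer: set, clear and test are a
           couple of CPU instructions each, with no memory manager involved */
        static inline uint64_t set_flag(uint64_t flags, int i)   { return flags | (1ULL << i); }
        static inline uint64_t clear_flag(uint64_t flags, int i) { return flags & ~(1ULL << i); }
        static inline bool     test_flag(uint64_t flags, int i)  { return (flags >> i) & 1; }

        /* ANDing two entire "boolean arrays" is a single operation instead of
           a loop over an array of bytes */
        static inline uint64_t and_all(uint64_t a, uint64_t b)   { return a & b; }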
  11. srvany is not the same as sc.exe or srvinstw.exe. The first is a sort of wrapper that allows a standard executable to be wrapped in a way that it will run as a service and at least properly respond to service control requests (although things like stopping the service will really more or less simply kill the process, so it isn't a very neat solution, but it works for many simple things). sc.exe is THE standard Windows command-line tool for the service control manager, and has been for almost as long as Windows services have existed. The interface to the service control manager is really located in advapi32.dll and can also be called directly from applications (but many functions will require elevated rights to work successfully). I'm not sure about srvinstw.exe, but the standard way of interacting with the service control manager through a GUI is the MMC snap-in for services, which you can reach through Control Panel->Administrative Tools. While srvany works reasonably well for simple tasks, for more complicated services it is really better to go through the trouble of integrating a real interface to the service control manager into an application meant to be executed as a service.
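     For reference, a minimal C sketch of what that interface looks like at the advapi32 level (service name and path are made up; error handling omitted; needs to run elevated):

        #include <windows.h>   /* the service control API lives in advapi32.dll */

        void install_service(void)
        {
            SC_HANDLE scm = OpenSCManager(NULL, NULL, SC_MANAGER_CREATE_SERVICE);
            SC_HANDLE svc = CreateService(scm,
                "MyService", "My Example Service", SERVICE_ALL_ACCESS,
                SERVICE_WIN32_OWN_PROCESS, SERVICE_DEMAND_START,
                SERVICE_ERROR_NORMAL, "C:\\MyApp\\service.exe",
                NULL, NULL, NULL, NULL, NULL);
            CloseServiceHandle(svc);
            CloseServiceHandle(scm);
        }

     A real service additionally has to call StartServiceCtrlDispatcher() and register a handler with RegisterServiceCtrlHandler() to respond properly to stop and shutdown requests, which is exactly the part srvany fakes for you.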
  12. I second everything Matt said here.
  13. Not really removed. LabVIEW 2013 is the first version to officially support any of the Linux RT targets, so it was simply never added.
  14. Yeah, after rereading it a few more times I got the feeling that something like that had happened. I think it's not worth the effort. Things are pretty clear now, and substantially editing posts after the fact is something that is generally considered neither helpful nor correct. We all sometimes write things that later turn out to be badly written. For myself, I usually limit editing of posts to fixing typos and adding an occasional extra bit of information that seems useful.
  15. If you look at the link from Fab, you can see that you should probably use LocalAppDataFolder instead if you want the non-roaming version of it.
  16. And here you walk into the mist! LabVIEW is written in C(++) for the most part, but it doesn't, and has never, created C code for the normal targets you and I are using it with. Before LabVIEW 2010 or so, it translated the code first into a directed graph and from there directly into machine code. Since then it has a two-layer approach. First it translates the diagram into DFIR (Dataflow Intermediate Representation), which is a more formalized version of a directed graph representation with additional features. Most of the algorithm optimizations, including things like dead code elimination, constant folding and many more, are done at this level. From there the data is passed to the open source LLVM compiler engine, which then generates the actual machine code representation. At no point is any intermediate C code involved, as C code is notoriously inadequate for representing complex relationships in an easy-to-handle way. There is a C generator in LabVIEW that can translate a LabVIEW VI into some sort of C++ code. It was used for some of the early embedded toolkits, such as the AD Blackfin Toolkit, Windows Mobile Toolkit and ARM Toolkit. But the generated code is pretty unreadable, and the solution proved very hard to support in the long run. You can still buy the C Generator add-on from NI, which gives you a license to use that generator, but its price is pretty exorbitant and active support from NI is minimal. Except under the hood for the Touch Panel Module, in combination with an embedded Visual C Express installation, it is not used in any currently available product from NI, AFAIK.
  17. Unfortunately I haven't found much time to work on this in the meantime. However, while the ping functionality of this library was a geeky idea that I pursued for the sake of seeing how hard it would be (it turned out to be pretty easy, given the universal low-level API of this library), I don't think it has much merit in the context of this library. In order to implement ping directly on the socket library interface, one is required to use raw sockets, which are a privileged resource that only processes with elevated rights can create. I'm not seeing how it would be useful to start a process as admin just to be able to ping a device. And someone will probably argue that the ping utility on Windows or Linux doesn't need admin privileges. That is right, because under Linux this is solved by giving the ping utility special rights for accessing raw sockets during installation, and under Windows through a special ping DLL that interfaces to a privileged kernel driver implementing the ping functionality. At least under Windows this library could theoretically interface to that same DLL, but its API doesn't really fit easily into this library, and I didn't feel like creating a special-purpose LabVIEW API that breaks the rest of the library concept and is only possible under Windows anyhow.
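     To show what I mean about raw sockets being privileged, a tiny C sketch (Berkeley sockets as on Linux; the Winsock equivalent fails the same way without admin rights):

        #include <stdio.h>
        #include <sys/socket.h>
        #include <netinet/in.h>

        int main(void)
        {
            /* a hand-rolled ping needs a raw ICMP socket; without elevated
               rights (root, or CAP_NET_RAW on Linux) this simply fails */
            int s = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
            if (s < 0)
                perror("socket(SOCK_RAW)");  /* "Operation not permitted" */
            return 0;
        }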
  18. Why do graphs have autoscaling on their axes? The IMAQ Vision control has the same for the range. Certainly not a bug, but sometimes not what you expect. If you show the Image Information you also see the range that the image currently uses, and you can change that directly there or through properties.
  19. I would guess this is only true if you use compiled code separated from the VI. Otherwise the corresponding binary compiled code resource in the VI will be very much different and will definitely contain some indication of bitness. Still, for the rest of the VI it most likely indeed doesn't matter at all, especially for an empty VI. There might be certain things on the diagram that change, but I would guess almost all of it is code generation related, so it would actually only affect the VI itself if you don't use separate compiled code.
  20. I'm not really understanding what you are saying. For one, the case that UI controls configured to coerce their values would do the coercion also when the VI is used as a subVI: that was true in LabVIEW 3.0 and maybe 4.0, but was removed after that because it was indeed considered a bad idea. I should say that quite a few people got upset at this back then, but you seem to agree that it is not desirable. As to using the fact that the Value Change event does not get triggered to allow you to alert a user that there was a coercion??? Sorry, but I'm not following you here at all. How can you use an event that does NOT occur to trigger any action?? That sounds so involved and out of our space-time continuum that my limited brain capacity can't comprehend it. But you can't use the fact that the value is the same to mean that a limit has been reached. The user is free to go to the control, type in exactly the same value as is already shown, hit enter or click on something else in the panel, and the Value Change event will be triggered. Value Change simply doesn't mean that there is a different value, only that the user did something with the control to change it to the same or a different value. Sounds involved, I know, but there have been many discussions, both in the LabVIEW development team as well as in many other companies who design UI widget libraries, and they generally all agree that you want to trigger on user interaction with the control for maximum functionality and leave the details about whether an equal value should mean something or not to the actual implementor. The name of the event may indeed be somewhat misleading here. In LabWindows/CVI, NI used the term VALUE_COMMIT for the same event. However, I suppose the word "commit" was considered too technical for use in LabVIEW.
  21. I'm not sure I would fully agree here. Yes, security is a problem, as you cannot get at the underlying socket in a way that would allow injecting OpenSSL or similar into the socket, for instance. So TCP/IP using the LabVIEW primitives is limited to unencrypted communication. Performance-wise they aren't that bad. There is some overhead in the built-in data buffering that costs some performance, but it isn't that bad. The only real limit is the synchronous character towards the application, which makes some high-throughput applications more or less impossible. But those are typically protocols that are rather complicated (video streaming, VOIP, etc.) and you do not want to reimplement them on top of the LabVIEW primitives, but rather import an existing external library for that anyway. Having a more asynchronous API would also be pretty hard to use for most users. Together with the fact that it is mostly only really necessary for rather complex protocols, I wouldn't see any compelling reason to spend too much time on that. I worked through all this pretty extensively when trying to work on this library. Unfortunately, the effort to invest in such a project is huge, and the immediate needs for it were somewhat limited. Shaun seems to be working on something similar at the moment, but possibly making its scope even bigger. I know that he prefers to solve as much as possible in LabVIEW itself rather than creating an intermediate wrapper shared library. One thing that would concern me here is the implementation of the intermediate buffering in LabVIEW itself. I'm not sure that you can get performance there similar to doing the same in C, even when making heavy use of the In-Place structure in LabVIEW.
  22. Hooovahh mentioned it on the side after ranting a bit about how bad the NI INI VIs were, but the Variant Config VIs have a "floating point format" input! Use that if you don't want the library to write floating point values in the default %.6f format. You could use, for instance, %.7e for scientific format with 7 digits of precision, or %.7g to let it switch between fixed and scientific notation depending on the magnitude of the value.
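     These codes essentially follow the C printf conventions, so a quick C illustration shows what each specifier produces for a small value:

        #include <stdio.h>

        int main(void)
        {
            double v = 0.000123456789;
            printf("%.6f\n", v);   /* 0.000123       default fixed format, most digits lost */
            printf("%.7e\n", v);   /* 1.2345679e-04  scientific, 7 digits after the point */
            printf("%.7g\n", v);   /* 0.0001234568   7 significant digits, fixed or scientific */
            return 0;
        }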
  23. I wonder if this is very useful. The Berkeley TCP/IP socket library, which is used on almost all Unix systems including Linux, and on which the Winsock implementation is based too, has various configurable tuning parameters. Among them are things like the number of outstanding acknowledge packets, as well as the maximum buffer size per socket that can be used before the socket library simply blocks any more incoming data. The cRIO socket library (well, at least for the newer NI Linux systems; the VxWorks and Pharlap libraries may be privately baked libraries that could behave less robustly), being in fact just another Linux variant, certainly uses them too. Your mega-jumbo data packet will simply block on the sender side (and fill your send buffer) and more likely cause a DOS attack on your own system than on the receiving side. Theoretically you can set your send buffer for the socket to 2^32-1 bytes, of course, but that will impact your own system performance very badly. So is it useful to add yet another "buffer limit" on the higher-level protocol layers? Aren't you badly muddying the waters about proper protocol layer responsibilities with such band-aid fixes? Only the final high-level protocol can really make any educated guesses about such limits, and even there it is often hard to do if you want to allow variable-sized message structures. Limiting the message to some 64KB, for instance, wouldn't even necessarily help if you get a client that maliciously attempts to throw thousands of such packets at your application. Only the final upper layer can really take useful action to prepare for such attacks. Anything in between will always be possible to circumvent with better-architected attack attempts. In addition, you can't set a socket buffer above 2^16-1 bytes after the connection has been established, as the corresponding TCP windows need to be negotiated during connection establishment. Since you don't get at the refnum in LabVIEW before the socket has been connected, this is therefore not possible. You would have to create your DOS code in C or similar to be able to configure a send buffer above 2^16-1 bytes on the unconnected socket before calling the connect() function.
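     For completeness, a small C sketch of that last point (function name, address and buffer size are arbitrary): the enlarged send buffer has to be requested on the still unconnected socket, something the LabVIEW refnum never gives you access to:

        #include <string.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        int connect_with_big_sndbuf(const char *ip, unsigned short port)
        {
            int s = socket(AF_INET, SOCK_STREAM, 0);
            int size = 4 * 1024 * 1024;  /* 4 MB send buffer */
            struct sockaddr_in addr;

            /* must happen BEFORE connect(): the TCP window scaling that makes
               buffers above 64KB effective is negotiated in the handshake */
            setsockopt(s, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));

            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port);
            addr.sin_addr.s_addr = inet_addr(ip);
            return connect(s, (struct sockaddr *)&addr, sizeof(addr)) ? -1 : s;
        }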