Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. Wow James! How should we know that csrc=0x21 means a boolean? Of course, now that you have said it, I remember the typecodes of the typedef data, but I would never have reached that conclusion on my own. Also, is it just a problem with booleans on the conditional terminal, or with the conditional terminal in general? The boolean case I could easily live with, as I usually use bitflags for such applications anyhow.
  2. You might have to make sure the instrument driver properly terminates all sent commands with CR, LF or CR/LF, depending on your instrument, for it to work over RS-232. On IEEE-488.2 there is a dedicated handshake line to indicate the end of a message, and most instruments support that by default (unless they are from before 1980 or so). On RS-232 there is no such way to indicate the end of a command, so this needs to be done with a specific character sequence, which is device dependent. Almost all devices offering both GPIB and RS-232 options will accept the additional character sequence on GPIB messages too without any trouble. Also make sure you initialize the VISA session to terminate all reads on the last character of your device-specific termination sequence. That can be done through properties in your initialize VI. Instrument drivers that are properly programmed for multi-interface operation usually read the "Interface Type" property and apply a few additional configuration settings for a serial interface, such as the termination character for reads, but also baud rate, bit length, stop bits and parity, which are important for RS-232.
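The RS-232 framing described above boils down to always appending the device-specific termination sequence to outgoing commands. A minimal Python sketch of that idea (the helper name and the default CR/LF sequence are illustrative assumptions, not part of any particular instrument driver):

```python
def terminate_command(command: str, termination: str = "\r\n") -> str:
    """Append the device-specific termination sequence (hypothetical helper).

    On GPIB the EOI handshake line marks the end of a message; on RS-232
    only this character sequence tells the instrument the command is done.
    """
    if command.endswith(termination):
        return command                  # already terminated, don't double it
    return command + termination

print(terminate_command("*IDN?"))       # what actually goes over the wire
```

A VISA-based driver would additionally configure the session to terminate reads on the last character of that sequence, so incoming responses are framed the same way.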
  3. At the time I started with this (LabVIEW 5) there were no shared variables (and when they arrived much later they had a lot of initial trouble working as advertised), so the only options were to integrate some external code or DIY. An additional feature of my own simple TCP protocol is that I could fairly easily add support for other environments such as Java on Android without having to wait for NI to come up with that, if they ever will. Of course I agree that it is probably a very bad idea to mix large high-latency messages with low-latency messages of any sort in the same server, be it LabVIEW or something else. They tend to have diametrically opposed requirements that are almost impossible to satisfy cleanly in a single design. Using cloned handler VIs does alleviate this limitation, but at some serious cost, as each VI clone takes up considerable resources, and it still complicates the design quite a bit when you mix those two message types in the same server. Also, using VI clone handlers involves VI Server and the potential to get locked up for extended times by the UI thread/root loop issue. And shared variables are not the counter-proof to this; I wouldn't consider them really low latency, although they can work perfectly in many situations. So yes, if I had to start again now I might go with shared variables instead, although without DSC support they still have some limitations, such as no real dynamic deployment and also the aforementioned missing event support, but that last one is not something my own TCP-based protocol offers out of the box either. I wouldn't go as far as declaring that pointless, but it is a limitation of course.
  4. You need to close the Notifier refnum wherever you want the consumer loop to shut down. That should be enough to make the consumer loop exit on the resulting error. You may have to work a little on the Get Notifier Status inside the case structure. The current example in fact mixes the actual notifier event with a means of detecting whether the inner loop should terminate. I personally would remove this, and the inner loop entirely, unless you really want that loop inside the consumer to run for as long as the switch is on.
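The shutdown pattern described here (destroy the notifier, let the consumer exit on the resulting error) is not LabVIEW-specific. A Python analogue, where the `Notifier` class is a purely illustrative stand-in for a LabVIEW notifier refnum:

```python
import queue
import threading

class Notifier:
    """Illustrative stand-in for a LabVIEW notifier (not a real API)."""
    _CLOSED = object()

    def __init__(self):
        self._q = queue.Queue()

    def send(self, value):
        self._q.put(value)

    def close(self):
        # Destroying the refnum: every pending and future wait errors out.
        self._q.put(self._CLOSED)

    def wait(self):
        item = self._q.get()
        if item is self._CLOSED:
            self._q.put(self._CLOSED)   # keep signalling any other waiters
            raise RuntimeError("notifier closed")
        return item

received = []

def consumer(n: Notifier):
    while True:
        try:
            received.append(n.wait())   # block until notification
        except RuntimeError:
            break                       # exit on the resulting error

n = Notifier()
t = threading.Thread(target=consumer, args=(n,))
t.start()
n.send("switch on")
n.close()                               # this alone shuts the consumer down
t.join()
```

Note there is no separate "should I stop?" flag mixed into the notification payload; closing the notifier is the single shutdown signal, which is the point of the pattern.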
  5. My understanding is that this is actually fully OK, especially since it is LGPL and you are not really linking hard to the executable at all. In fact most people would think that even calling a GPL executable in such a way would be fine, but some purists claim that any form of linking with GPL code makes your program GPL too, unless there is a specific exclusion clause like the one for the Linux kernel. I personally would never distribute an application that makes use of GPL components in any way without making everything GPL, just to be safe, even though I believe that calling a GPL executable through System Exec or the like should be OK. Strictly speaking, if this were not OK, one could not install a GPL application on a non-GPL OS, since the act of double-clicking the executable links that executable to the OS too, so the person launching the application would be committing a GPL violation by starting it on a non-GPL OS. You would have to distribute the LGPL license text somewhere with your installation, and preferably also a notice about which part it applies to, as well as where one can find the source code (the project website is enough) and what version you used, if the executable doesn't allow easy identification itself.
  6. Actually, as the original copyright holder you are entirely free to give it to NI and sell it too, unless you give it to NI with a clause in the license to the contrary. Whether it would be a commercially useful exercise is a completely different story, but legally you own the code, and most likely you would simply allow NI to use it and distribute it with LabVIEW, not pass ownership of the code to NI.
  7. Well, when I was talking about polling I didn't mean the server polling the clients, but rather the old traditional multi-client TCP/IP server example that adds incoming connection requests to an array of connections, which is then processed continuously inside a loop with a very small TCP Read timeout. Polling is probably the wrong name here. Unlike the truly asynchronous operation where one server handler clone is spawned per incoming connection, this solution simply processes all incoming data packets sequentially. While this can cause problems with response time if there are large messages to be processed and/or many parallel connections to be served, it completely avoids any root loop issues, as it does not use any root-loop-synchronized LabVIEW nodes. I have done several applications with an architecture based on this scheme, and aside from some race conditions in handling TCP Reads with low timeouts in LabVIEW 5.0 that could crash LabVIEW hard, this has always worked fine, even with several parallel connections needing to be serviced. One architecture is based on a binary data protocol similar to the CVT Client Communication (CCC) Reference Library; the other in fact uses HTTP-based messages, which even allows connecting to certain subsets of the server with a simple web browser, also supporting user authentication for connections.
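The single-loop scheme described above, one listener plus an array of open connections all serviced sequentially with a small read timeout, can be sketched in Python. This is an illustrative echo server, not the CCC library or any LabVIEW code; the 50 ms select timeout plays the role of the "very small TCP Read timeout":

```python
import selectors
import socket
import threading

def polling_echo_server(listener: socket.socket, stop: threading.Event) -> None:
    """Service the listener and every open connection in one sequential loop.

    No per-client handler clones are spawned; each ready socket is handled
    inline, in arrival order, so nothing root-loop-like is ever blocked.
    """
    sel = selectors.DefaultSelector()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ)
    while not stop.is_set():
        for key, _events in sel.select(timeout=0.05):   # the small poll timeout
            if key.fileobj is listener:
                conn, _addr = listener.accept()         # new client joins the set
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:
                conn = key.fileobj
                data = conn.recv(4096)
                if data:
                    conn.sendall(data)                  # processed sequentially
                else:
                    sel.unregister(conn)                # client disconnected
                    conn.close()
    sel.close()
```

The trade-off is exactly as stated in the post: one slow, large message delays every other client, but there is no per-connection resource cost and no asynchronous spawning machinery.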
  8. What is so bad about a polling server? Do you foresee dozens of clients connecting to the server at the same time and loading it with large messages to be processed and answered? Otherwise a polling server can work quite well. If you really need to handle potentially many simultaneous connections, you may have to rethink your strategy anyhow. LabVIEW is not the ideal environment for heavy-load network servers, and even in C it requires some very careful programming to avoid running into thread starvation and/or process creation overload in such situations. The Apache web server uses some very sophisticated and platform-specific code paths to allow handling many simultaneous connections, most of which is not directly portable to LabVIEW.
  9. Most likely because it was added as a quick hack by someone at NI, to be used in a tool that did not need to run as a built executable. The person doing that 1) either didn't think it would be useful in a built app, 2) wasn't sure whether there might be certain implications in a built app that would require extra work for this to work properly, 3) made a mistake when adding that property so it does not work properly in a built app, or 4) there may even have been some brainstorming where some very bright guys in the LabVIEW developer group identified at least one reason why this feature couldn't work as intended in a built app without a lot of extra effort, so the feature was intentionally disabled for built apps; that might also be the reason it was made private, since it is in fact an unfinished feature.
  10. I think you need to distinguish a few things. First, the root loop and the UI thread are closely related but, as far as I'm aware, not exactly the same. Second, not everything in VI Server is blocked by the root loop, but Open Application and Open VI Reference surely are, and any property nodes that operate on UI elements execute in the UI thread. Once you have a VI reference open and can keep it open, you should be able to invoke the VI remotely with Call By Reference without blocking. I'm not sure about the asynchronous Call By Reference though. Do you see problems with the synchronous CbR, or are you trying to do other things with the VI and application references?
  11. It depends on the type of GPS device you are using, but most civil GPS receivers will only be accurate to about 50 m, and that can get worse depending on conditions such as clouds or rain.
  12. Aside from any encoding issues, it would be enough to change the line System.out.println(b); to System.out.println((char)b);. The first invokes the println(int number) method, while the second invokes the println(char character) method. This of course will only work for ASCII characters up to decimal code 127, which LabVIEW should normally be generating, unless you use foreign-language characters in the format string itself. If you need proper character encoding too, you would have to wrap the input stream in an input reader of some sort.
  13. You will need some ground reference somehow (so at least two wires) and also some biasing of the signal with a resistor to pull it to the passive level. Otherwise you can't measure an open-collector signal properly. The cheapest option will most likely be one of the USB DAQ boxes.
  14. Well, the names of the Windows DLLs have not changed between 32-bit and 64-bit Windows simply because Microsoft wanted to avoid having to change all DLL names everywhere DLLs are referenced dynamically by name. The 32 in the name is a leftover artifact, since all these DLLs had the same name without the 32 in Windows 3.x. Back when moving to the 32-bit architecture, the MS developers chose a distinctive name to avoid name collisions. When moving to 64-bit, MS decided to use different base directories instead and leave the DLL names alone. So VIs accessing system DLLs can work on both 32-bit and 64-bit Windows, but you need to be aware that some parameters can actually change in size. For instance, any HANDLE datatype (almost every WinAPI datatype starting with an H) is a 32-bit entity on 32-bit Windows and a 64-bit entity on 64-bit Windows. So the right data type to use for such parameters is the (unsigned) pointer-sized integer. If you use a normal 32-bit integer it may still work on resource-constrained systems, but it will sooner or later fail badly once your system has more memory and the value of handles can go above the 4 GB address range. The same applies to any pointer that you decide to treat as an integer type for whatever reason, and there are also a few other Windows datatypes that can change bitness.
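The size difference described above is easy to see from any language with a foreign function interface. A small Python sketch using ctypes: `c_void_p` is the pointer-sized integer that HANDLE-like parameters should map to, while a fixed `c_int32` is what a plain I32 in a Call Library Function node would give you:

```python
import ctypes

# Pointer-sized: 4 bytes on a 32-bit process, 8 bytes on a 64-bit process.
handle_size = ctypes.sizeof(ctypes.c_void_p)

# Fixed-size 32-bit integer: always 4 bytes, regardless of the process bitness.
fixed_size = ctypes.sizeof(ctypes.c_int32)

print(handle_size, fixed_size)
```

In a 64-bit process the two differ, which is exactly why stuffing a HANDLE into an I32 truncates it once handle values exceed the 4 GB range.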
  15. While ActiveX has the feature of also launching the application server, there is no reason why you can't use the TCP/IP interface, and I'm pretty sure they do that for multi-platform reasons. You simply have to launch the executable with System Exec yourself first. The entire VIPM procedure is likely a bit involved and complicated: enumerating the installed LabVIEW versions from the registry, finding their install paths, reading (and possibly manipulating) the according LabVIEW.ini file to find the TCP/IP server properties, then trying to connect, and in case of failure starting LabVIEW with System Exec and trying to connect again. But it's all doable, although the details of which timeouts to use when trying to connect can involve a lot of trial and error.
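The connect/launch/retry step described above is a generic pattern. A Python sketch of the retry part (the function name and timing parameters are illustrative assumptions, not anything VIPM actually does):

```python
import socket
import time

def connect_with_retry(host: str, port: int, attempts: int = 5,
                       timeout: float = 1.0, delay: float = 0.5) -> socket.socket:
    """Try to open a TCP connection, retrying a few times.

    Useful when the application server was just launched (e.g. via System
    Exec) and needs a moment before it starts listening.
    """
    for attempt in range(attempts):
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            if attempt == attempts - 1:
                raise                   # give up after the last attempt
            time.sleep(delay)           # wait for the server to come up
```

In the full scheme, the caller would start the executable on the first failure and then call this again; tuning `attempts`, `timeout` and `delay` is where the trial and error mentioned above comes in.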
  16. The most likely reason for this is that the concept of a recycle bin is not very consistent across the different platforms LabVIEW runs on (Windows, Linux, Mac OS X) and works quite differently on each. Also, technically the Windows Recycle Bin is in fact a feature tacked onto the actual Windows core, residing in the Windows shell component, which is a very chaotic collection of interfaces, both COM and procedural, and shows that many people have added functionality over time with VERY different ideas about how to do it. There is no central authority making sure those interfaces are consistent or interoperable, as every product division seems to have added its own gadget with its own preferred architecture. Also, these interfaces often change in incompatible ways between OS versions, with APIs being added, deprecated, changed and even removed at will.
  17. You came across the only real problem with re-licensing source code released under open source licenses, be it BSD, GPL or whatever: once code has been posted, there are typically multiple authors and, with that, usually also multiple copyright holders (unless it is a trivial patch submission), and in order to change the license, or to distribute the code under an additional license, you have to get the consent of all copyright holders. Otherwise, unless you have posted the original code under one of the rather artificially crafted licenses brought up by Shaun, you are always free to grant other licenses under completely different terms, as you are still the copyright holder. A license typically does not relinquish the copyright (in fact the license itself never does, though the license document can of course contain verbiage that relinquishes the copyright); that can only be done by a copyright transfer, either by trade, such as an employment or work-for-hire contract, or by a written statement releasing the copyright, for instance into the public domain. Copyright also typically expires a certain time after the creator has passed away. But copyright and licenses are totally different things. Copyright (at least in Western countries) arises automatically at the moment something is created. No registration whatsoever has to be done to get it, and only an explicit statement can give it away, but that is not what licenses are normally about, and certainly not licenses such as BSD or (L)GPL. OpenG allows you to release your own code into the public domain, but not the OpenG VIs themselves, unless you are the original author of those too. And mostly to AQ, although not so much as a serious suggestion: since the attribution clause is such a problem in terms of maintenance, why not get rid of it altogether and scrap the sentence in the LabVIEW EULA that requires all LabVIEW applications to include an attribution to the fact that they were created with LabVIEW?
That would save maintenance nightmares for all LabVIEW users too!
  18. It should, AFAIK, at least if you as the application developer legally own the LabVIEW for Linux development system.
  19. By this reasoning you wouldn't be able to add anything from LAVA either, including this library. I find that a bogus reason. Strictly speaking, even your already existing reuse libraries, and even any newly developed VIs for a specific project, would have to be considered unapproved under this aspect. Yes, you can write unit tests and whatever else to get them stamped as somehow approved, but so can you with 3rd-party libraries. I'm not saying you are wrong here, but trying to point out that the hysterical fear of contaminating something with open source and whatnot is leading down a very slippery path for sure.
  20. The latest package was done by Jonathan Green. I guess he missed the platform limitations when porting the package file to VIPM. Personally I would say the LargeFile library doesn't really make sense post-8.21 at all, and even in 8.20 only to a very limited degree. I'm not using VIPM for package generation, so I can't modify the package configuration myself.
  21. The availability for all platforms is definitely an error. I'm not sure when that got into the Toolkit. I wrote the library years ago but never really packaged it. As for LabVIEW versions: since LabVIEW 8.0 almost all file functions use 64-bit offsets, and since about 8.20 they actually work mostly fine. So it makes no sense to include this library in a Toolkit that is for LabVIEW 2009 and later only.
  22. For (L)GPL software (which applies to the DLL portion of the libraries that have one), you do need to mention it both in compiled apps and in the source code (the second is logical, as you cannot remove existing copyright notices from source code). All versions of the BSD license also have one clause for source and one for binary distribution, and both apply independently of each other. Since the LabVIEW OpenG part is basically BSD, the answer to your question is therefore a clear yes! I would consider it enough to mention in the About box the fact that you use the OpenG libraries and then add a license.txt or similar file to the installation containing the license text. If your app makes use of OpenG ZIP, LabPython or another library using a DLL, you should strictly speaking also add the relevant LGPL license text and point to the Sourceforge OpenG project where one can download the source code of those libraries.
  23. Actually it's not! You are right that the shared library will refuse to work if your clock is set after June 2010 or so, simply posting a dialog at runtime. But the cause of the original refnum problem is that this library makes use of so-called user refnums. These are defined by resource files that get installed by the package, and in order for those refnums to be valid the according resource files have to be loaded by LabVIEW, which it only does at startup (at least I'm not aware of any VI Server method to do that at runtime too, like the refresh-palette method VIPM uses after installation of a new package). I'll have a look at the library soon and see if I can do anything to resurrect it, but feedback has been very limited, so I simply assumed that nobody was using it. Please note that the SSL support of that library is really minimal. It allows getting an https: connection up and running, but lacks any and all support for modifying properties and accessing methods of the SSL context to change its behavior or, for instance, add private certificates to it.
  24. In addition to what asbo said, testing with no-name adapters is anything but conclusive. Some things can work with a particular hardware device, depending on what driver version gets installed, and then stop working after some seemingly unrelated change like a Windows update. So even if NI can conclude one thing, it may only be valid for that exact HW/SW combination on that computer and behave rather differently on other setups. Sometimes NI hardware may seem expensive compared to no-name or semi-no-name Asian low-cost devices, but there is a difference in what you get. One is a hardware device for which the people at NI actually sat down and wrote a specific driver, written by people with serious device driver development experience; the other is usually a stripped-down copy of the reference design from the chip manufacturer, often with a completely unaltered device driver from that same chip manufacturer. However, reference designs are usually not meant to be end-user sellable items but developer platforms with which to "develop" a product. The device drivers provided with those reference designs are at best a starting framework, but seldom a fully featured device driver. FTDI drivers are an exception in that respect, as they are already pretty feature-complete, although support for things like line breaks or no-data detection is still not something I would rely upon from a reference design driver. It's the reason why FTDI-based adapters are usually fairly functional for standard serial port applications, no matter which no-name producer sells them. Most no-name manufacturers wouldn't know what button to push to compile their own device driver!
  25. Yes, hardware exposure to digital logic certainly helps the understanding. If you think of integers as a simple register or counter, the oddball in the mix is the signed integer rather than the unsigned one, due to its use of two's complement. Think about it: the MSB is the sign! In two's complement: -128 => 0x80, -1 => 0xFF, 1 => 0x01, 127 => 0x7F. A naive approach could instead be an offset notation: -128 => 0x00, 127 => 0xFF; or a separate sign bit: -127 => 0xFF, -0 => 0x80, 0 => 0x00, 127 => 0x7F. The reason computers use two's complement is that the others are rather difficult to implement efficiently in logic for addition and especially subtraction. I have seen such code too in the past, and simply assumed that the original programmer either didn't think past the end of his nose, or may have used signed integers before and then, in a frenzy to avoid coercion dots, changed them to unsigned without reviewing all the code and noticing the now superfluous positive check. In one or two cases the effective code was in fact in the negative branch of the case structure, which of course could never execute, so that made me scratch my head a bit.
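The two's-complement mapping above can be checked directly. A small Python sketch (the helper name is illustrative) that reinterprets an unsigned byte pattern as a signed value, plus the property that makes the representation attractive in hardware: addition just wraps, with no special sign handling:

```python
def as_signed(byte_value: int) -> int:
    """Interpret an unsigned 8-bit value as a two's-complement signed byte."""
    return byte_value - 256 if byte_value >= 0x80 else byte_value

print(as_signed(0x80))  # -128
print(as_signed(0xFF))  # -1
print(as_signed(0x01))  # 1
print(as_signed(0x7F))  # 127

# Addition wraps naturally: -1 + 1 == 0 without any sign-specific logic,
# which is why an adder circuit needs no separate subtraction path.
print((0xFF + 0x01) & 0xFF)  # 0
```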