Everything posted by Rolf Kalbermatter

  1. It does depend a bit on the system you are working on. If it is the traffic control system of the national railway company, you'd better make sure your fix doesn't introduce a regression in some seemingly unrelated area, or your time in that job is likely over. Our Dutch railway support organization just managed to create country-wide chaos this week when they installed a new version of the software over the weekend, which was supposed to fix something (why else would they even change the software?). On Monday, the control system for the signals in the most important railway station of the country decided to shut down, leaving the whole station virtually inaccessible for every train and causing chaos across the whole country and beyond. The comment in the newspapers the next day was along the lines of: "We are very sorry, but we already saw this coming on Sunday after we installed the update!" I'm sure the new software passed all the regression tests they have (I can't believe they wouldn't do that for such a system), but somewhere, somehow, something fell through the cracks, and when the system was stress tested in Monday morning commuter traffic, it simply failed.
  2. For Windows RT yes; for a Raspberry Pi running Linux, not necessarily. NI Linux RT is already ARM based. There might be problems since the NI Linux RT devices use an ARMv7-A based chip while the Broadcom SoC on the Raspberry Pi uses an ARMv6, but the CPU on the new model 2 also seems to support ARMv7, so even that might work. The only two problems are: 1) It's going to be quite a bit of work to port the NI Linux RT sources to run flawlessly on the Raspberry Pi 2. 2) The licensing implications for the LabVIEW RT parts are pretty unclear. While the NI Linux RT kernel is fully open under the GNU license, which makes it fully legal to take it and make it run anywhere you want, the same can't be said about the LabVIEW runtime on those targets, and of course the many drivers such as DAQmx, NI-VISA, NI-RIO etc. The RIO and VISA drivers especially are absolutely mandatory to make the LabVIEW runtime work on a target at all, and for seamless integration into the LabVIEW development system.
  3. Actually, the 64-bit Windows version is not the real problem. I haven't tested everything, but it compiles. The real challenge at the moment is the various Linux variants.
  4. You're doing yourself an injustice. LAVA was created before the certification program anyway, but that aside, you don't need a CLA to be recognized as capable of architecting a solution. I'm not sure you read the thread on the German board, but the poster definitely has greater problems than just being able to use the less than well architected driver. A basic LabVIEW course, or at least working through a few tutorials, would help a lot. I can't bring up the patience Gerd so dutifully demonstrates in that thread.
  5. NI email addresses normally follow the pattern <firstname>.<lastname>@ni.com. That should be correct enough to enter on the sign-up page for the Beta.
  6. Crosspost from the German LabVIEW forum, which already has a lengthy support thread about this. Asking the same question here is not likely to give you more information than you already got, except possibly information that is so Advanced in content that it won't help. LAVA = LabVIEW for Advanced Virtual Architects
  7. That might be a somewhat too categorical statement. Windows 10 is supposed to bring the worlds of Windows Desktop, Windows RT and Windows Phone even closer together, although I have yet to see that happen. As such there is, at least for the Phone and RT variants, a possibility to run on ARM. However, I wonder how licensing would work, with a typical Windows license costing many times the price of the Pi hardware. Of course, the extra complication here is that LabVIEW for Windows is strictly x86/x64, so that won't help. You have a much better chance of getting the NI Linux RT kernel working on a Pi, since the Pi already supports Linux, so porting the NI Linux RT source tree to support the Pi hardware is "just" a smart pick-and-combine from the NI Linux RT sources and the Linux for RasPi sources.
  8. LabVIEW does not expose an interface that would allow you to load .NET assemblies dynamically. In fact, any .NET assembly that has been loaded at some point stays loaded, which can be troublesome if you want to debug a self-written assembly and try to replace the previous version with a new one without restarting LabVIEW. Therefore, your best bet is to create a .NET wrapper assembly that implements your plugin interface with one additional method, Initialize() or similar, that takes a path to the actual .NET assembly and the object to instantiate. In that method, load the assembly, instantiate the object interface you want, and store the object reference in a private variable. All the other methods and property accessors of your wrapper simply forward their calls to the actual implementation in the initialized assembly object.
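     A minimal sketch of such a wrapper in C# (the IPlugin interface, its members and the Initialize() signature are hypothetical placeholders; substitute your actual plugin contract):

        using System.Reflection;

        // Hypothetical plugin contract; replace with your real interface.
        public interface IPlugin
        {
            string Name { get; }
            double Process(double input);
        }

        public class PluginWrapper : IPlugin
        {
            private IPlugin actual;  // the dynamically loaded implementation

            // Extra method not in the plugin interface: loads the real
            // assembly at runtime and instantiates the requested class.
            public void Initialize(string assemblyPath, string typeName)
            {
                Assembly asm = Assembly.LoadFrom(assemblyPath);
                actual = (IPlugin)System.Activator.CreateInstance(
                    asm.GetType(typeName, true));
            }

            // Everything else just forwards to the loaded instance.
            public string Name
            {
                get { return actual.Name; }
            }

            public double Process(double input)
            {
                return actual.Process(input);
            }
        }

     LabVIEW then only ever references the wrapper assembly directly; the real implementation behind it can be swapped by pointing Initialize() at a different file.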
  9. Ohh, man. Now I remember why I didn't use the DOM parser. This is insane and goes against each and every LabVIEW convention about destroying refnums. But there are other things in that API that make proper resource handling difficult to do.
  10. There is no generally agreed method to close a process from another process. If it is a GUI application, sending it a WM_CLOSE message might work, but some applications will just as likely respond with a dialog asking if you really want to close. Command line apps expect a Ctrl-C to terminate prematurely, but might disable the Ctrl-C console handler for whatever obscure reason. The only way that will (almost) always work is to kill the process. Almost, because if the process is stuck in a call to kernel mode code, it may never receive the kill signal.
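     A sketch of that escalation from .NET, using only the standard System.Diagnostics.Process API (the 5-second grace period is an arbitrary choice):

        using System.Diagnostics;

        class ProcessCloser
        {
            static void CloseOrKill(int pid)
            {
                Process proc = Process.GetProcessById(pid);

                // CloseMainWindow() posts WM_CLOSE to the main window; a GUI
                // app may still answer with a "do you really want to quit?"
                // dialog and never exit.
                if (!proc.CloseMainWindow() || !proc.WaitForExit(5000))
                {
                    // Last resort: terminate without any cleanup. Even this
                    // can fail to take effect if the process is stuck in a
                    // kernel mode call.
                    if (!proc.HasExited)
                        proc.Kill();
                }
            }
        }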
  11. I'm aware of it. I figured that trying couldn't hurt, but I can only discourage such attempts! And yes, it's pretty upsetting and bad for the health, although not everyone may develop an ulcer from it.
  12. If you send me a nice modern Mac I might be able to work on that. My experiments with a Mac OS X installation under VirtualBox were pretty abominable.
  13. It's called inheritance. Node is a more generic object type than Document, but a Document is also a Node. A Node does not have the same properties that a Document has. The node output of Get First Matched Node.vi is the same refnum as the input, but since the VI uses the more generic Node refnum type, LabVIEW silently coerces the Document refnum into a Node. In principle you could typecast the Node refnum back into a Document refnum, but that is not really necessary here, since you already have the Document refnum available in the VI. It is also not a safe thing to do unless you know specifically that your Node is a Document (only really valid for the root node). So what to do here? Close the Document refnum as soon as you don't need it anymore. The Node refnum coming out of Get First Matched Node.vi is NOT a different refnum but really the coerced Document refnum you passed into the VI.
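     For comparison only, .NET's System.Xml shows the same inheritance relationship, with XmlDocument deriving from XmlNode (this is an analogy, not the LabVIEW DOM API):

        using System.Xml;

        class CoercionDemo
        {
            static void Main()
            {
                XmlDocument doc = new XmlDocument();
                doc.LoadXml("<root><item/></root>");

                // Implicit upcast: same object, just a more generic reference,
                // exactly like the silent Document -> Node coercion in LabVIEW.
                XmlNode node = doc;

                // The downcast only works because this particular node IS the
                // document (the root of the tree).
                XmlDocument back = (XmlDocument)node;

                // A child node is NOT a document; casting it to XmlDocument
                // would throw an InvalidCastException, the equivalent of
                // typecasting a non-root Node refnum into a Document refnum.
                XmlNode child = doc.DocumentElement.FirstChild;
            }
        }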
  14. You could write a separate VI library for each, and then dynamically load whichever .NET DLL interface you want at the moment through VI Server, or even more elegantly through LVOOP dynamic dispatch.
  15. Definitely have to echo that. If VISA doesn't work for your case, only a very complex OOP architecture that would take you years to develop would!
  16. I'm not going to make any promises here, but I'm working on it and tackled one more problem this weekend. As it seems, what remains is mostly some more testing of the Linux/VxWorks version and then wrapping everything up in an OpenG package. I might post a prerelease package here for testing and review, and I will certainly need some assistance getting the final package uploaded to the VI network somehow.
  17. I have slowly been working to add a few new features to the lvZIP library, make it Windows 64-bit compatible, and add support for additional RT targets. It's not quite finished yet; especially the support for UTF-8 encoding of file names on non-Windows systems is proving tricky to reconcile with the LabVIEW multibyte encoding, which may or may not be UTF-8 on Linux systems depending on the system configuration. As soon as I get that working I plan to release a new package with support for Windows, Linux and NI RT targets. Sorry, no Mac at this time; maybe I'll manage to get hold of a Mac at some point, but that has low priority for me at the moment.
  18. TCP_NODELAY doesn't change anything about the underlying TCP/IP protocol. It disables a feature of the socket that prevents the connection from sending a new Ethernet packet for every little data blurb a program might produce. Since the Ethernet TCP/IP protocol consists of multiple layered data packets, each with its own header, an application trying to send a message byte by byte would badly congest the network if the Nagle algorithm wasn't there. And that is not an academic concern: the Nagle algorithm wasn't added because of some academic idea that presented a problem that didn't exist for real! But that algorithm is a limiting factor for command-response protocols that want more than 2-3 round trips per second. Now you can argue that such a protocol is actually very unsuited to the realities of the interconnected internet, where the response time as reported by a simple ping can easily be in the hundreds of milliseconds. But if you want to do communication on a local network, it can be helpful to be able to disable the Nagle algorithm (see the snippet below). As to whether UDP would help: I don't think so. UDP doesn't even have a concept of acknowledgement of transmitted data, so it operates even more according to the principle: blast it out, forget it, and don't ever care about who received it, if anyone. The socket interface itself has traditionally no way to check the actual status of the network cable. Under Unix flavors there are some extensions available that allow you to get the status of an interface node through the socket interface (but not through the actual socket of a connection), and those extensions can differ quite a bit between Unix flavors and even between distributions of Linux alone. Under Windows this has to be done using iphlpapi.dll; Winsock doesn't expose an API to query the interface nodes but only implements the normal Berkeley socket API to handle connections/sockets.
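     For reference, this is how the option is set from .NET; both forms set the same TCP_NODELAY flag that setsockopt() sets in the Berkeley socket API:

        using System.Net.Sockets;

        class NoDelayDemo
        {
            static void DisableNagle(Socket socket)
            {
                // Convenience property on the .NET Socket class ...
                socket.NoDelay = true;

                // ... or the explicit socket option; both disable the Nagle
                // algorithm for this socket only, without touching anything
                // in the TCP/IP protocol itself.
                socket.SetSocketOption(SocketOptionLevel.Tcp,
                                       SocketOptionName.NoDelay, true);
            }
        }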
  19. TCP/IP was specifically designed to be robust. This means that the protocol can handle disruptions of a connection and retransmission of data over the same connection, or even over an alternative connection route. So what you see is not a bug; you simply have different expectations than what the reality is. TCP/IP does eventually have timeouts that will terminate a connection if no ACK or SYN is received after some number of retransmissions, but those timeouts are in the range of tens of seconds. Only once the connection has been put into one of the FIN_WAIT, CLOSE_WAIT or CLOSING states will even a TCP Write return with an error, but not before. Here is a short introduction to TCP/IP operation and why this reliability can indeed pose some surprises for the unsuspecting.
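     A small illustration of this behaviour (the peer address 192.168.1.50:6000 is made up; connect to a reachable host, then pull the network cable while the loop runs):

        using System;
        using System.Net.Sockets;

        class DeadPeerDemo
        {
            static void Main()
            {
                // Hypothetical peer; replace with a real host on your network.
                TcpClient client = new TcpClient("192.168.1.50", 6000);
                NetworkStream stream = client.GetStream();
                byte[] msg = { 0x42 };
                try
                {
                    while (true)
                    {
                        // Keeps "succeeding" on a dead link: the byte only
                        // lands in the local send buffer.
                        stream.Write(msg, 0, 1);
                        Console.WriteLine("{0:T} write ok", DateTime.Now);
                        System.Threading.Thread.Sleep(1000);
                    }
                }
                catch (Exception ex)
                {
                    // Only fires once TCP gives up retransmitting,
                    // typically tens of seconds after the cable was pulled.
                    Console.WriteLine("{0:T} write failed: {1}",
                                      DateTime.Now, ex.Message);
                }
            }
        }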
  20. It existed at least in 6.1, but you have to consider that while the node kept its appearance and principal operation, its implementation was improved many times over to handle more complex data transformations. Initially it didn't do much more than transform a variant that was a pretty exact representation of the target data.
  21. Are you sure it becomes NULL, or are you rather using the "Not A Number/Path/Refnum?" function to determine whether it is valid? If it is the first case I would be stumped, since the (internal numeric magic cookie) value shouldn't just change like that, but if you use that function then this is indeed expected behaviour, as drjdpowell explains. In LabVIEW the lifetime of a refnum automatically ends the moment the top-level VI in whose hierarchy the refnum was created goes idle. And while the numeric value of the refnum remains the same, it is not a valid refnum anymore after that (and should never be considered valid again within any feasible lifetime of the LabVIEW application instance).
  22. Considering your other thread about DMAing data around, this must be the other extreme in performance! One UDP message per 4-byte floating point value!
  23. I would definitely combine the numbers into one packet, like this:

        this.facePoints3D = frame.Get3DShape();

        // UDP Connection :: Talker ::
        Boolean done = false;
        Socket sending_socket = new Socket(AddressFamily.InterNetwork,
                                           SocketType.Dgram, ProtocolType.Udp);
        IPAddress send_to_address = IPAddress.Parse("172.22.11.2");
        IPEndPoint sending_end_point = new IPEndPoint(send_to_address, 80);

        // NOTE: nothing in this loop ever sets done = true; add your own
        // termination condition.
        while (!done)
        {
            int index = 0;
            // Pack all Z coordinates into one buffer: 4 bytes per single
            // precision float, so a single datagram carries the whole frame.
            byte[] bytearray = new byte[facePoints3D.Length * 4];
            foreach (Vector3DF vector in facePoints3D)
            {
                Array.Copy(BitConverter.GetBytes(vector.Z), 0,
                           bytearray, index, 4);
                index += 4;
            }
            try
            {
                sending_socket.SendTo(bytearray, sending_end_point);
                Console.WriteLine("Message has been sent");
            }
            catch (Exception send_exception)
            {
                Console.WriteLine("The exception indicates the message was not sent.");
            }
        } // ends while (!done) statement

     Then on the LabVIEW side use an Unflatten From String with an array of single precision floats as the data type input. And of course don't forget to set the "data contains size" input to false.