Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. I'm not going to make any promises here. But I'm working on it and tackled one more problem this weekend. It seems it is now mostly a matter of some more testing for the Linux/VxWorks version and then wrapping everything up in an OpenG package. I might post a prerelease package here for testing and review, and I will certainly need some assistance in getting the final package uploaded to the VI network somehow.
  2. I have been slowly working to add a few new features to the lvZIP library, make it Windows 64-bit compatible, and add support for additional RT targets. It's not quite finished yet; especially the support for UTF-8 encoding of file names on non-Windows systems is proving tricky to make work with the LabVIEW multibyte encoding, which may or may not be UTF-8 on Linux systems depending on the system configuration. As soon as I get that working I plan to release a new package with support for Windows, Linux and NI RT targets. Sorry, no Mac at this time; maybe I'll manage to get hold of a Mac at some point, but that has low priority for me at the moment.
  3. TCP_NODELAY doesn't change anything about the underlying TCP/IP protocol. It disables a socket feature that prevents the connection from sending a new Ethernet packet for every little data blurb a program produces. Since the Ethernet TCP/IP protocol consists of multiple layered data packets, each with its own header, an application trying to send a message byte by byte would badly congest the network if the Nagle algorithm weren't there. And that is not an academic idea: the Nagle algorithm wasn't added because of an academic idea that presented a problem that didn't exist for real! But that algorithm is a limiting factor for command-response protocols that want more than 2-3 round trips per second. Now you can argue that such a protocol is actually very unsuited to the realities of the interconnected internet, where the response time as reported by a simple ping can easily be in the hundreds of milliseconds. But if you want to do communication on a local network, it can be helpful to be able to disable the Nagle algorithm. As to whether UDP would help: I don't think so. UDP doesn't even have a concept of acknowledging transmitted data, so it works even more according to the principle: blast it out, forget it, and never care about whether anyone received it at all. The socket interface itself has traditionally no way to check the actual status of the network cable. Under Unix flavors there are some extensions available that allow getting the status of an interface node through the socket interface (but not through the actual socket of a connection), and those extensions can differ quite a bit between Unix flavors and even between distributions of Linux alone. Under Windows this has to be done using iphlpapi.dll; Winsock doesn't expose an API to query the interface nodes but only implements the normal Berkeley socket API to handle connections/sockets.
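The same option is exposed by the ordinary BSD socket API. As an illustration outside LabVIEW (not from the original post), here is a minimal Python sketch that disables the Nagle algorithm on a socket and verifies the option took effect:

```python
import socket

# Create a TCP socket and disable the Nagle algorithm on it.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back to confirm it took effect.
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay != 0)  # True: small writes now go out immediately
sock.close()
```

With the option set, each write is pushed onto the wire right away instead of being coalesced, which is exactly what a low-latency command-response protocol on a local network wants.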
  4. TCP/IP was specifically designed to be robust. This means that the protocol can handle disruptions of a connection and retransmission of data over the same connection or even an alternative connection route. So what you see is not a bug; you simply have different expectations than what the reality is. TCP/IP does eventually have timeouts that will terminate a connection if no ack or sync was received after some number of retransmissions, but those timeouts are in the range of tens of seconds. Once the connection has been put into one of the FIN_WAIT, CLOSE_WAIT or CLOSING states, even the TCP Write will return with an error, but not before. Here is a short introduction into TCP/IP operation and why this reliability can indeed pose some surprises for the unsuspecting.
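This behaviour can be demonstrated with plain BSD sockets as well. A Python sketch (illustrative only, loopback connection) showing that a send issued right after the peer closed still succeeds, because TCP merely queues the data locally for transmission; no error surfaces until the stack has actually given up on the connection:

```python
import socket

# Loopback server that accepts one connection and closes it immediately.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # let the OS pick a free port
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, _ = server.accept()
conn.close()                        # peer side sends FIN and goes away

# The write still "succeeds": TCP only queues the data for (re)transmission,
# so no error is reported at this point even though the peer is gone.
sent = client.send(b"hello")
print(sent)
client.close()
server.close()
```

Only a later write, after the stack has processed the peer's reset, would raise an error, mirroring what the LabVIEW TCP Write node does.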
  5. It existed at least in 6.1, but you have to consider that while the node maintained its appearance and principal operation, its implementation was improved many times to handle increasingly complex data transformations. Initially it didn't do much more than transform a variant that was a pretty exact representation of the target data.
  6. Are you sure it gets NULL, or are you rather using the "Not A Number/Path/Refnum?" function to determine if it is valid? If it is the first case I would be stumped, since the (internal numeric magic cookie) value shouldn't just change like that, but if you use that function then this is indeed expected behaviour, as drjdpowell explains. In LabVIEW the lifetime of a refnum is automatically terminated the moment the top-level VI in whose hierarchy the refnum was created goes idle. And while the numeric value of the refnum remains the same, it is not a valid refnum anymore after that (and should never be considered valid again within any feasible lifetime of the LabVIEW application instance).
  7. Considering your other thread about DMAing data around, this must be about the other extreme in performance! One UDP message per 4 byte floating point value!
  8. I would definitely combine the numbers into one packet like this:

     this.facePoints3D = frame.Get3DShape();

     // UDP Connection :: Talker ::
     Boolean done = false;
     Socket sending_socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
     IPAddress send_to_address = IPAddress.Parse("172.22.11.2");
     IPEndPoint sending_end_point = new IPEndPoint(send_to_address, 80);
     while (!done)
     {
         int index = 0;
         byte[] bytearray = new byte[facePoints3D.Length * 4];
         foreach (Vector3DF vector in facePoints3D)
         {
             Array.Copy(BitConverter.GetBytes(vector.Z), 0, bytearray, index, 4);
             index += 4;
         }
         try
         {
             sending_socket.SendTo(bytearray, sending_end_point);
             Console.WriteLine("Message has been sent");
         }
         catch (Exception send_exception)
         {
             Console.WriteLine("The exception indicates the message was not sent.");
         }
     } // ends while (!done) statement

     Then on the LabVIEW side use an Unflatten From String with an array of single precision as data type input. And of course don't forget to set the "data contains size" input to false.
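As a language-neutral illustration of what the receiver has to undo (not part of the original post), here is a Python sketch that packs an array of 32-bit floats into one datagram payload and unpacks it again. One caveat worth noting: BitConverter.GetBytes on an x86 machine produces little-endian bytes, while LabVIEW's Unflatten From String defaults to big-endian, so the byte order may need attention on the LabVIEW side:

```python
import struct

values = [1.5, -2.25, 3.0]          # sample Z coordinates

# Sender side: concatenate each float as 4 little-endian bytes,
# mirroring what BitConverter.GetBytes does on an x86 machine.
payload = b"".join(struct.pack("<f", v) for v in values)

# Receiver side: unflatten the whole payload back into an array of floats.
decoded = list(struct.unpack("<%df" % len(values), payload))
print(decoded)
```

Sending all values in one payload like this means one UDP packet per frame instead of one packet per 4-byte float.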
  9. Yes, you signal the event from your C# code to LabVIEW; however, to do that you will have to link dynamically to LabVIEW.exe (or lvrt.dll for built executables, or lvrtf.dll for special builds) and treat it as an unmanaged API. But that poses the question why you would even bother with virtual/shared memory when you have an event that could carry the data anyhow. Calling PostLVUserEvent() from within LabVIEW, while in principle possible, is simply the Rube Goldberg variant of using a "Generate User Event" node in your code.
  10. Actually the C compilers I have worked with work a little differently. Normally each .c (or .cpp, .cxx or whatever) source file results in an .obj file. These object files CAN be combined (really: linked) into a .lib library file. When you use a single function from an object file, the entire object unit is linked into the resulting executable image, even if it contains 500 other unused functions. These unused functions can reference functions from other object units and cause them to be included as well, even though they are never actively called in the resulting executable. Only when you link against a library file will the compiler pick out the embedded object units that contain at least one function used by any other included object file or the actual executable. It's likely that with careful linker scripts you can nowadays hand-optimize this step with some modern compilers, but it's not what is normally done, since linker scripts are a rather badly documented feature. With all this said, an lvlib (and lvclass) much more resembles a C object file than anything else in terms of what gets linked into the final executable. As such the term library is somewhat misleading, especially when you compare it to the C .lib library file, which is more of a real collection of units.
  11. Physical memory access is something you can't do on any modern OS without intervention through kernel device drivers. The entire physical memory access is highly protected from user space applications, and not just to pester users but to protect them. Also the term physical memory address is pretty unclear when you talk about external hardware resources that get mapped into the system memory through techniques like PCI or similar. The resources are typically allocated and assigned dynamically at discovery time (almost anything is nowadays plug and play hardware), which makes it completely unreliable to use fixed addresses even if you can access them through some device driver. You need additional functionality to discover and enumerate the hardware in question before use and query its current hardware addresses which can change with every boot or even plug and play event.
  12. Lua definitely too. It makes a clear distinction between input parameters and function return values. No by-reference parameters to return information from a function! Of course, once you pass in objects like tables, there is always the possibility of side effects from changes made to those objects inside the function.
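Python follows the same philosophy and is used here purely for illustration: results come back through return values, possibly several at once, rather than through by-reference output parameters, while mutable objects passed in can still be changed as a side effect:

```python
def min_max(values):
    """Return two results at once instead of filling output parameters."""
    return min(values), max(values)

lo, hi = min_max([3, 1, 4, 1, 5])
print(lo, hi)  # 1 5

# Mutable objects passed into a function can still be mutated inside it,
# which is the side-effect caveat mentioned above.
def append_item(items):
    items.append(42)

data = []
append_item(data)
print(data)  # [42]
```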
  13. Well the SCPI parser is beyond any resources that NI would be able to help with. But if it is about the instrument driver itself you should probably contact the instrument driver developer group at NI here. They can give you more information as to the requirements to get your driver certified and added to the instrument driver library as well as resources about recommended practices in such a driver to ease the process of certification.
  14. Unless you explicitly tell the deployment engine to include the VI source code in the built executable (for remote debugging, for instance), it will be completely removed and only the compiled machine code and connector pane resource will be included. As such it is quite a bit harder to hack than a normally compiled object code executable, since the executable code is placed into the binary file in a LabVIEW-specific way, not how MSVC or GCC would normally do it. The machine code itself is of course machine code, as that is the only way to let the CPU actually execute it, but if someone goes to the effort of hacking that, the only measure to prevent it is to put your system in a steel safe together with any copies of your source code and dump it above the Mariana Trench, if you get my drift. You can improve the obscurity a bit by renaming the VIs (and relinking the VI hierarchy), so that the VI names inside the deployed executable are just useless nonsense, but such a tool is not readily available and would have to be developed and then invoked as a pre-build step before each build. The simplest way to do that would be to load all top-level VIs into memory, recursively rename their subVIs to some random strings, and finally save each of them. More advanced operations would require the use of semi-documented VI Server functions. But even self-extracting encrypted executables won't stop a determined hacker; at most they slow him (or her) down for a few hours. They do check for active debuggers before doing the extraction, but there are ways around that too.
  15. Where did you install the .Net DLL to? Your calling process knows nothing about the secondary .Net DLL at all. It just tells Windows to load the primary DLL and lets Windows and .Net figure out how to handle the rest. However, Windows will only search in very specific, well-known locations for secondary dependencies. In the case of .Net DLLs the only locations that are always guaranteed to be searched are the GAC and the directory in which the executable file for the process is located (e.g. C:\<Program Files>\National Instruments\<your LabVIEW version> for the LabVIEW IDE, and for built applications the directory from which you started your app). Anything else is bound to cause all kinds of weird problems unless you know what you are doing and tell .Net explicitly where else to find your assemblies. This requires calling various .Net API functions directly, and messing up here can make it impossible for your app to load .Net at all.
  16. I've started on a small LabVIEW library that implements the low-level stuff such as locking the device and mounting/unmounting, but then realized that it will only work if the application that calls it is started with elevated rights. This made me wonder if it is such a good idea to incorporate this directly into a LabVIEW application, as the application would then always have to be started with admin rights. Besides being inconvenient, that is also pretty dangerous! It doesn't matter whether you call these functions from a specially crafted DLL that you call from LabVIEW or implement the small wrapper code to call the Windows API directly in LabVIEW. Personally I would feel more comfortable running LabVIEW or the LabVIEW executable with normal rights and invoking an external command line tool through System Exec with admin rights than running the entire application as admin.
  17. I'm a bit doubtful about that one. Memory mapping is part of the NT kernel, but paths starting with "\\.\" are passed directly to the device manager interface. It's not entirely impossible that it works anyway, but I suspect some stumbling blocks there. The memory mapping code most likely will not observe sector boundaries at all but simply treat the underlying device as one continuous stream of data. This might cause trouble for certain device interfaces. Drat! One thing I forgot completely about this is that one can only open devices from an elevated level, i.e. an explicit administrator login when starting the application. This might be a real killer for most applications in LabVIEW. And no, there is no other way to write (or even read) disk images as far as I know.
  18. The src/disk.c file in there is more or less all that is needed. It does depend on Qt for its string and error message handling, which is a bit unfortunate but could fairly easily be removed. But thinking a bit further, it could even be implemented completely with the Call Library Node directly in LabVIEW.
  19. It's not yet released. Initially we had planned to add runtime support for NI real-time targets to LuaVIEW. This seemed a fairly straightforward exercise but turned out to be a major effort, especially with the recent addition of two different NI Linux real-time targets. It's in the works and we hope to get it ready for a release by the end of this year.
  20. Generally, "Request Deallocation" is not the right solution for problems like this, even if it works more aggressively than it did in previous versions, which I'm not aware that it does. You should first think about the algorithm and consider changing the data format or storage to a better-suited one. LabVIEW is not C! In C you not only can control memory down to the last allocated byte, you actually have to, and if you miss one location you create memory leaks or buffer overflows. In LabVIEW you generally don't control memory explicitly and can't even do so at the byte level at all. It's similar with other high-level languages, where memory allocation and deallocation are mostly handled by the compiler system rather than the programmer. Writing a C program that is guaranteed to never leak memory or overwrite buffers is a major exercise that only few programmers manage, and only after lots and lots of debugging. So having this pushed into the compiler, which can be formally designed and tested to do the right thing, is a major step towards better programs. It takes away some control and tends to cost memory due to things like lazy deallocation (mostly for performance reasons, but sometimes also as a defensive measure, since especially in multithreading environments there is not always a 100% guarantee that deallocating a memory area would be a safe operation at that point).

      Request Deallocation basically has to stop any and all LabVIEW threads to make sure that no race condition can occur when a block marked as currently unused is freed while another thread attempts to reuse that block. As such it is a total performance killer, not only because it causes many extra memory allocations and deallocations, but also because it has to stop just about everything in LabVIEW for the duration of its work.
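The "fix the data format first" advice translates to any managed language. A Python sketch (illustrative only, not LabVIEW) contrasting the two allocation patterns: growing a buffer element by element can force repeated reallocations, while preallocating once and filling in place does a single allocation, analogous to Initialize Array plus Replace Array Subset instead of Build Array in a loop:

```python
N = 1000

# Growing element by element: the runtime may have to reallocate the
# underlying storage repeatedly as the container outgrows its capacity.
grown = []
for i in range(N):
    grown.append(i * 2)

# Preallocating the full buffer once and filling it in place avoids that.
prealloc = [0] * N
for i in range(N):
    prealloc[i] = i * 2

print(grown == prealloc)  # True: same result, different allocation pattern
```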
  21. Duplicate of post http://lavag.org/topic/18653-control-design-and-simulation-error-wire-is-a-member-of-cycle/ and Neil was quicker and already posted the solution with Feedback Nodes there. Yes, inserting a Feedback Node will allow you to connect wires in circles. You just need to be aware that this won't give you an infinitesimally small delta t; instead your loop timing effectively becomes dt, since the Feedback Node stores the value from one loop iteration and feeds it back to the input in the next iteration.
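In textual form a Feedback Node behaves like a variable that carries one iteration's result into the next. A small Python sketch (illustrative only) of integrating a constant input where the loop period acts as dt:

```python
dt = 0.01            # loop period acts as the integration step
feedback = 0.0       # the Feedback Node's stored value

samples = [1.0] * 100    # constant input of 1.0 for 100 iterations
for x in samples:
    feedback = feedback + x * dt   # new value is fed back next iteration

print(round(feedback, 6))  # ~1.0 after integrating 1.0 over 1 second
```

The integration error of such a scheme scales with dt, which is exactly why the loop timing matters once the cycle is closed with a Feedback Node.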
  22. Well, the COM automation server should have been installed with a setting that specifies what threading model it is happy with. LabVIEW honors that setting. The common threading models available in COM are:
      - single threading: the component can only be called from the main thread of the application that loaded it. The LabVIEW UI thread coincidentally also is its main thread.
      - apartment threading: the component can be called from any thread, but during the lifetime of any object it must always be the same thread.
      - free threading: the component is fully thread safe and can be called from any thread at any time in any combination.
      Most Automation servers require apartment threading, often not so much because it is actually required but simply because it is the default, and many developers are too lazy to verify whether their component would run with a less restrictive threading model or, more seriously, whether it might actually require a more restrictive one.
  23. LabVIEW is a fully managed environment. As such you do not have to worry about explicit memory deallocation as long as you are not calling into external code that allocates memory itself. You're not doing anything like that here; you use LabVIEW functions to create those buffers, so LabVIEW is fully aware of them and will manage them automatically. There is one optimization you can make, though. The Flatten To String function most likely serves no purpose at all. It's not clear what data type the constant 8 has, and if it is not a U8 it might even cause problems, as the buffer that gets created is most likely bigger than what you need and the Plan7 function will therefore read more data from the datablock. In the best case this causes performance degradation, because more data is transferred each time than necessary, and it could possibly cause errors if the resulting read operation goes beyond the datablock limit. And if it is an unsigned 8-bit integer constant, a Byte Array To String function would do the same but be more performant and clearer.
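The underlying point can be shown with explicit byte layouts. A Python sketch (illustrative, not from the original post; LabVIEW's Flatten To String produces big-endian data by default) of how the constant's type changes the size of the resulting buffer string:

```python
import struct

# Flattening the constant 8 as a 32-bit signed integer yields four bytes...
flat_i32 = struct.pack(">i", 8)
print(flat_i32)        # b'\x00\x00\x00\x08'

# ...while as an unsigned 8-bit integer it is the single byte intended.
flat_u8 = bytes([8])
print(flat_u8)         # b'\x08'
```

A buffer four times larger than intended is exactly how a mistyped constant makes the subsequent read overrun the datablock.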
  24. And store that second password in the application! Granted, it might make a lazy adversary think he already got the password and give up when it doesn't work, but it would be almost no win against any determined adversary.
  25. Your title is misleading :-). It's only the consumer versions of Windows 7 that are discontinued. Most Windows 7 computers sold nowadays are Windows 7 Professional systems for professional use anyway. Microsoft has promised to keep selling those and to give at least one year of notice before it stops doing so.