Everything posted by Rolf Kalbermatter

  1. Yes, you signal the event from your C# code to LabVIEW, but to do that you will have to link dynamically to LabVIEW.exe (or lvrt.dll for built executables, or lvrtf.dll for special builds) and treat it as an unmanaged API. But that raises the question of why you would even bother with virtual/shared memory when you have an event that could carry the data anyhow. Calling PostLVUserEvent() from within LabVIEW, while possible in principle, is simply the Rube Goldberg variant of using a "Generate User Event" node in your code.
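To illustrate the dynamic linking part of this outside of LabVIEW: the sketch below uses Python's ctypes to bind to an exported function of a shared library at runtime and call it as an unmanaged API. The C math library serves purely as a stand-in here; in the real scenario you would load LabVIEW.exe or lvrt.dll the same way and bind PostLVUserEvent() instead (consult LabVIEW's extcode.h for its actual prototype).

```python
import ctypes
import ctypes.util

# In a real LabVIEW integration you would load the hosting runtime, e.g.
#   lv = ctypes.CDLL("lvrt.dll")   # or LabVIEW.exe inside the IDE process
# and bind lv.PostLVUserEvent. Here we bind to the C math library instead,
# purely to show the pattern of treating an export as an unmanaged API.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the prototype explicitly, as you must for any unmanaged call.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # -> 1.0
```

The same explicit-prototype discipline applies when binding PostLVUserEvent: get the argument types wrong and you corrupt the stack, just as with a badly configured Call Library Node.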
  2. Actually, the C compilers I have worked with work a little differently. Normally each .c (or .cpp, .cxx or whatever) source file results in an .obj file. These object files CAN be combined (really linked) into a .lib library file. When you use a single function from an object file, the entire object unit is linked into the resulting executable image, even if it contains 500 other unused functions. These unused functions can reference functions from other object units and cause them to be included as well, even though they are never actively called in the resulting executable. Only when you link with a library file will the linker pick out just the embedded object units that contain at least one function used by another included object file or the actual executable. With careful linker scripts you can nowadays probably hand-optimize this step with some modern compilers, but that is not normally done, since linker scripts are a rather badly documented feature. With all this said, an lvlib (or lvclass) much more resembles a C object file than anything else in terms of what gets linked into the final executable. As such the term library is somewhat misleading, especially when you compare it to the C .lib library file, which is more of a real collection of units.
  3. Physical memory access is something you can't do on any modern OS without going through a kernel device driver. Physical memory is strictly protected from user-space applications, not to pester users but to protect them. Also, the term physical memory address is pretty unclear when you talk about external hardware resources that get mapped into system memory through buses like PCI. Those resources are typically allocated and assigned dynamically at discovery time (almost all hardware is nowadays plug and play), which makes it completely unreliable to use fixed addresses even if you can access them through some device driver. You need additional functionality to discover and enumerate the hardware in question before use and to query its current hardware addresses, which can change with every boot or even with every plug and play event.
  4. Lua definitely does too. It makes a clear distinction between input parameters and function return values: there are no pass-by-reference parameters for returning information from a function! Of course, once you pass in objects like tables there is always the possibility of side effects on those objects from changes made inside the function.
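Python happens to follow the same two rules, so the point can be shown with a short Python sketch (Lua's multiple return values correspond to Python's tuple returns; a Lua table corresponds to a Python list here; all names are chosen for illustration):

```python
def divide(a, b):
    # Results leave the function only through return values,
    # never through by-reference scalar parameters.
    return a // b, a % b  # quotient and remainder, like Lua's multiple returns

q, r = divide(17, 5)
print(q, r)  # -> 3 2

def append_marker(items):
    # A passed-in mutable object (like a Lua table) CAN be modified
    # inside the function; that side effect is visible to the caller.
    items.append("marker")

data = [1, 2]
append_marker(data)
print(data)  # -> [1, 2, 'marker']
```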
  5. Well, the SCPI parser is beyond any resources that NI would be able to help with. But if it is about the instrument driver itself, you should probably contact the instrument driver developer group at NI. They can give you more information about the requirements to get your driver certified and added to the instrument driver library, as well as resources about recommended practices in such a driver to ease the certification process.
  6. Unless you explicitly tell the deployment engine to include the VI source code in the built executable (for remote debugging, for instance), it is completely removed and only the compiled machine code and connector pane resource are included. As such it is quite a bit harder to hack than a normally compiled executable, since the executable code is placed into the binary file in a LabVIEW-specific way, not how MSVC or GCC would normally do it. The machine code itself is of course machine code, as that is the only way to let the CPU actually execute it, but if someone goes to the effort of hacking that, the only countermeasure is to put your system in a steel safe together with any copies of your source code and dump it above the Mariana Trench, if you get my drift. You can improve the obscurity a bit by renaming the VIs (and relinking the VI hierarchy) so that the VI names inside the deployed executable are just useless nonsense, but such a tool is not readily available; it would have to be developed and then invoked as a pre-build step before each build. The simplest way would be to load all top-level VIs into memory, recursively rename their subVIs to random strings, and finally save each of them. More advanced operations would require the use of semi-documented VI Server functions. But even self-extracting encrypted executables won't stop a determined hacker; at most they slow him (or her) down for a few hours. They do check for active debuggers before doing the extraction, but there are ways around that too.
  7. Where did you install the .Net DLL to? Your calling process knows nothing about the secondary .Net DLL at all. It just tells Windows to load the primary DLL and lets Windows and .Net figure out how to handle the rest. However, Windows will only search very specific, well-known locations for secondary dependencies. In the case of .Net DLLs, the only locations that are always guaranteed to be searched are the GAC and the directory in which the executable file for the process is located (e.g. C:\<Program Files>\National Instruments\<your LabVIEW version> for the LabVIEW IDE, and for built applications the directory from which you started your app). Anything else is bound to cause all kinds of weird problems unless you know what you are doing and explicitly tell .Net where to find your assemblies. That requires calling various .Net APIs directly, and messing up here can make it impossible for your app to load .Net at all.
  8. I got started with a small LabVIEW library that implements the low-level stuff such as locking the device and mounting/unmounting, but then realized that it will only work if the application that calls it is started with elevated rights. This made me wonder if it is such a good idea to incorporate this directly into a LabVIEW application, as it would always have to be started with admin rights. Besides being inconvenient, that is also pretty dangerous! It doesn't matter whether you call these functions from a specially crafted DLL invoked from LabVIEW or implement the small wrapper code to call the Windows API directly in LabVIEW. Personally I would feel more comfortable running LabVIEW or the LabVIEW executable with normal rights and invoking an external command line tool through System Exec with admin rights than running the entire application as admin.
  9. I'm a bit doubtful about that one. Memory mapping is part of the NT kernel, but paths starting with "\\.\" are passed directly to the device manager interface. It's not entirely impossible that it works anyway, but I suspect some stumbling blocks there. The memory mapping code most likely will not observe sector boundaries at all but simply treat the underlying device as one continuous stream of data, which might cause trouble for certain device interfaces. Drat! One thing I completely forgot about this is that one can only open devices from an elevated level, i.e. an explicit administrator login when starting the application. This might be a real killer for most applications in LabVIEW. And no, there is no other way to write (or even read) disk images as far as I know.
  10. The src/disk.c file in there is more or less all that is needed. It does depend on QTWidget for its string and Error Message handling which is a bit unfortunate but could be fairly easily removed. But thinking a bit further it could even be implemented completely with the Call Library Node directly in LabVIEW.
  11. It's not yet released. Initially we had planned to add runtime support for NI realtime targets to LuaVIEW. This seemed a fairly straightforward exercise but turned out to be a major effort, especially with the recent addition of two different NI Linux realtime targets. It's in the works and we hope to get it ready for a release at the end of this year.
  12. Generally, "Request Deallocation" is not the right solution for problems like this, even if it worked more aggressively than in previous versions, which I'm not aware that it does. You should first think about the algorithm and consider changing the data format or storage to something better suited. LabVIEW is not C! In C you not only can control memory down to the last allocated byte, you actually have to, and if you miss one location you create memory leaks or buffer overflows. In LabVIEW you generally don't control memory explicitly and can't do so at the byte level at all. It's similar with other high-level languages, where memory allocation and deallocation are mostly handled by the compiler system rather than by the programmer. Writing a C program that is guaranteed never to leak memory or overwrite buffers is a major exercise that only few programmers manage, and only after lots and lots of debugging. So pushing this into the compiler, which can be formally designed and tested to do the right thing, is a major step towards better programs. It takes away some control and tends to cost memory due to things like lazy deallocation (mostly for performance reasons, but sometimes also as a defensive measure, since especially in multithreaded environments there is not always a 100% guarantee that deallocating a memory area would be safe at that point). Request Deallocation basically has to stop any and all LabVIEW threads to make sure that no race condition can occur where a block marked as currently unused is freed while another thread attempts to reuse it. As such it is a total performance killer, not only because it causes many extra memory allocations and deallocations but also because it has to stop just about everything in LabVIEW for the duration of its work.
  13. Duplicate of post http://lavag.org/topic/18653-control-design-and-simulation-error-wire-is-a-member-of-cycle/ and Neil was already quicker and posted the solution with Feedback Nodes there. Yes, inserting a Feedback Node will allow you to connect wires in a circle. You just need to be aware that this won't be an infinitesimal delta t; instead the effective dt is your loop timing, since the Feedback Node stores the value from one loop iteration and feeds it back to the input in the next iteration.
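The effect can be mimicked in a few lines of ordinary code. This sketch (plain Python, values chosen for illustration) integrates a constant signal where the "feedback" value is simply last iteration's output, so the effective dt is the loop period rather than an infinitesimal step:

```python
# Discrete integrator: the feedback value is last iteration's output,
# exactly like a Feedback Node carrying a value to the next loop iteration.
dt = 0.25         # loop period in seconds -- this becomes the integrator's dt
x = 2.0           # constant input signal
y = 0.0           # the Feedback Node's initial value

for _ in range(8):            # eight iterations = 2.0 s of simulated time
    y = y + x * dt            # new output uses the previous iteration's y

print(y)  # -> 4.0  (integral of 2.0 over 2.0 s)
```

Note that if the loop timing jitters, so does the integration step, which is exactly the caveat about using loop timing as dt.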
  14. Well, the COM automation server should have been installed with a setting that specifies what threading model it supports, and LabVIEW honors that setting. The common threading models available in COM are:
      - single threading: the component can only be called from the main thread of the application that loaded it (the LabVIEW UI thread coincidentally also is its main thread)
      - apartment threading: the component can be called from any thread, but during the lifetime of any object it must always be the same thread
      - free threading: the component is fully thread safe and can be called from any thread at any time in any combination
      Most Automation servers require apartment threading, often not so much because it is required but simply because it is the default, and many developers are too lazy to verify whether their component would run with less restrictive threading or, more seriously, whether it might actually require a more restrictive threading model.
  15. LabVIEW is a fully managed environment. As such you do not have to worry about explicit memory deallocation unless you are calling into external code that allocates memory itself. You're not doing anything like that here; you use LabVIEW functions to create those buffers, so LabVIEW is fully aware of them and will manage them automatically. There is one optimization you can do, though. The Flatten To String function most likely serves no purpose at all. It's not clear what data type the constant 8 has, and if it is not a U8 it might even cause problems: the buffer that gets created is most likely bigger than what you need, and the Plan7 function will therefore read more data from the datablock. In the best case this causes performance degradation because more data is transferred each time than necessary, and it could possibly cause errors if the resulting read operation goes beyond the datablock limit. And if it is an unsigned 8-bit integer constant, a Byte Array To String function would do the same thing but be more performant and clearer.
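The size difference is easy to demonstrate with an analogous snippet. In Python, struct.pack plays the role of Flatten To String and bytes([...]) the role of Byte Array To String; flattening the value 8 as a 32-bit integer yields four bytes where a single byte was intended:

```python
import struct

# "Flatten To String" of the value 8 as a 32-bit big-endian integer:
flattened = struct.pack(">i", 8)
print(flattened)       # -> b'\x00\x00\x00\x08'  (4 bytes, not 1!)

# "Byte Array To String" of a single U8 with value 8:
byte_string = bytes([8])
print(byte_string)     # -> b'\x08'  (exactly the one byte intended)

# If the flattened form determines the buffer contents or size,
# the extra bytes inflate every transfer, as described above.
print(len(flattened), len(byte_string))  # -> 4 1
```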
  16. And store that second password in the application! Granted, it might make a lazy adversary think he already got the password and give up when it doesn't work, but it would be almost no win against any determined adversary.
  17. Your title is misleading :-). It's only the consumer versions of Windows 7 that are discontinued. Most Windows 7 computers sold nowadays are Windows 7 Professional systems for professional use anyway. Microsoft promised to keep selling those and to give at least one year of notice before it stops doing so.
  18. Believe me, alignment is the smaller of the problems he is encountering. The bigger problem is the lack of understanding of C pointers, strings and all that, and the fact that LabVIEW strings are something very different from C strings; that, together with the proper memory allocation and deallocation rules for any pointer you use. At some point I lost my patience, and ned was doing a better job of trying to teach him a little about C programming than I could bring myself to do, so I left it at that.
  19. Technically there would be a way to solve this in LabVIEW for almost all cases. The Windows API generally has ANSI and Unicode variants of all file I/O functions. If they chose to explicitly use the Unicode variants internally and make the LabVIEW-path-to-Windows-path function a tiny bit smarter, they could have solved this years ago for all OSes since Win2K. Since paths are a distinct datatype in LabVIEW, the entire modification could have been done transparently for LabVIEW diagrams without compatibility problems. Why they didn't do it back in LabVIEW 8.0, when they modified all file functions to work with 64-bit file offsets to allow access to files >2GB, is still not clear to me. It would have been an invasive change in the LabVIEW source code only, with almost no chance of any negative influence on LabVIEW applications. Changing the file functions to use Unicode paths internally would IMHO have been a less involved exercise than supporting 64-bit file offsets throughout. Not to say that 64-bit offsets are not important (they are), but allowing paths to go beyond 260 characters is also a pretty important feature nowadays (and would have been a great opportunity to support 10 years ago, when this generally wasn't yet a real issue for most computer installations).
  20. Well, there is a chance that it is not really an out-of-process invocation. A LabVIEW DLL consists of three things:
      1) a stub loader
      2) a C-compiled wrapper for each function
      3) the compiled LabVIEW VI (and subVIs) for each function
      The stub loader is responsible for locating the right LabVIEW runtime and loading it, but it skips that if it determines that the calling process already contains a compatible LabVIEW runtime of the same version. This can also be the runtime environment of the LabVIEW IDE. This not only speeds up the initialization of the DLL but also allows more efficient passing of function parameters, since otherwise all parameters need to be copied from the calling environment into the DLL environment, as Shaun has already quoted. So while the LabVIEW DLL is loaded into the process, the LabVIEW runtime is not necessarily loaded into that same process: the stub loader may determine that the runtime already present cannot execute the VIs in the DLL because of version differences. While possible, that certainly poses some difficulties in terms of heap management, but out-of-process invocation would also bring its own kind of challenges.
  21. I regularly do, to find hidden objects in a diagram of inherited code, but I normally ignore the rather useless unconnected-terminals warning. That one was maybe sort of useful in the old days, but nowadays, with event structures and whatnot, I often end up with terminals just lying in the corresponding event case without ever needing to be connected. For the rest I'm with everyone else here: don't treat an analysis result like a race condition as a breaking error. It's a serious warning (unlike unconnected terminals) but certainly shouldn't break code execution on its own.
  22. Geez, I completely forgot about asynchronous VISA. It has been years since I had to tinker with that because of flaky device drivers.
  23. Are you using any Call Library Nodes in your code? If they come from NI libraries they are likely OK, but anything else is suspect. Badly configured Call Library Nodes or buggy external shared libraries have the potential to create exactly such problems. And LVOOP has the potential to be extra susceptible to such corruptions, but unless your LabVIEW installation itself somehow got corrupted, LVOOP is by no means the cause of them.
  24. 10 MB per DUT, fully multiplied by the number of DUTs! That makes me believe you might have set all VIs to reentrant just to be on the safe side. While that is possible, and LabVIEW nowadays can handle full reentrancy, it is not a very scalable decision. Reentrancy for parallel VI hierarchies is often unavoidable, but it should be an informed decision made per VI, not a global setting.
  25. Simply use an Initialize Array node with the datatype U8 and a length >= 120, then convert it to a string with Byte Array To String. Changing the Call Library Node parameter to an array of U8 would work too, but you would end up with a string that always contains 120 characters, of which only the characters up to the first NULL byte are meaningful. If you configure the Call Library Node parameter as a CStr pointer, LabVIEW takes care on return of scanning the string for this NULL character and limiting the string length to the correct size.
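The two behaviors can be demonstrated with ctypes, which follows the same rules as the Call Library Node. The 120-byte size and the "DEVICE-42" result written by the imagined DLL are illustrative assumptions:

```python
import ctypes

# Allocate a 120-byte buffer: the equivalent of Initialize Array
# (U8, length 120) + Byte Array To String in LabVIEW.
buf = ctypes.create_string_buffer(120)

# Pretend the DLL wrote a short result followed by a NUL terminator;
# the rest of the buffer stays zero-filled.
buf.value = b"DEVICE-42"

# The "array of U8" view: always the full fixed length, trailing NULs and all.
print(len(buf.raw))   # -> 120

# The "CStr" view: scanning stops at the first NUL, giving the correct size.
print(buf.value)      # -> b'DEVICE-42'
```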