
Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. I guess it could be integrated into LabVIEW in a similar way to how Lua is integrated through LuaVIEW. But as the current maintainer of LuaVIEW I don't see much merit in fragmenting my time even more with yet another scripting interface for LabVIEW. The wiki page certainly looks problematic, and the license doesn't make it the first choice for a scripting interface either.
  2. Why in all the world do you want to use Windows messages to communicate between two VIs? This has only disadvantages: it's a platform-specific solution, it makes everything rather complicated, and it performs poorly!
  3. Basically, your Windows desktop is simply an application that Windows starts after the user logs in (or is logged in automatically). You can change the registry entry for this to any program you like, including a LabVIEW app. This needs some careful planning ahead, because once you do it, you can mostly only do the things in Windows that your application provides an interface for. So if you don't plan some way to start, for instance, the file manager, you may have locked yourself out of that account pretty effectively. One exception is Ctrl-Alt-Del, which still works, but with some Windows API magic this is quite easily remedied too. A computer that is tied down like this is pretty hard to get into in any way other than through your shell replacement app, but again, watch out: this applies to you too, not just the novice operator.
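As a sketch of the shell replacement mentioned above: the per-machine setting lives under the Winlogon registry key, whose Shell value defaults to explorer.exe (restore that value to get the normal desktop back). The application path below is a made-up example:

```ini
; Example .reg file -- replaces the Windows shell for all users.
; "C:\\MyKioskApp\\kiosk.exe" is a hypothetical path; back up the
; original value ("explorer.exe") before changing anything.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
"Shell"="C:\\MyKioskApp\\kiosk.exe"
```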
  4. It's the low-level device name notation that modern Windows systems use before any disk drives are mapped on top of it, so I doubt it is something LabVIEW does on its own explicitly. Somehow, somewhere, Windows seems to have mount points (or whatever) that point to a hard-drive partition that no longer exists on your system, and maybe LabVIEW tries to enumerate those disk drives or mount points at some point and triggers this error message. Try a registry search for the HD volume name; maybe it exists in there and may even point you in the direction of what could have caused this. Have you at some point used removable hard disks with NTFS formatting?
  5. Thanks for the clarification. In that case it seems like a good solution unless the directory rights silently get modified too. I should test that but am currently tied up with quite a bit of other things.
  6. Incidentally, these are exactly the error codes I mentioned in my reply: you can't just always honor or ignore them, as what you should do depends on the situation in which they occur. A timeout usually means that there simply hasn't been data, and you should retry whatever you tried to do after a reasonable amount of time. This can be a Read, a Connect or a Write operation. You should build some retry limit into it, as it usually makes little sense to retry endlessly. If after several minutes there is still no peer to connect to, there might be a bigger problem, like a disconnected network cable. Peer disconnected is another error that can happen because of network failure, and one you can handle fully transparently by closing your network connection and attempting to reconnect. Last but not least, you should of course consider the timeouts you use for the various functions. When you do a connect with a 120-second timeout, the connect will wait that long, likely preventing your application from quitting when you hit the quit button, until it gets the requested operation (a connection for a connect, data for a read), encounters an error, or the timeout expires. This is probably the reason you believe you can only Ctrl-Alt-Del your application: the network functions simply sit in a timeout waiting for something. One thing that usually works to terminate most network functions in LabVIEW is to actually close the network refnum they operate on. This is not really good programming for normal network refnums and might sometimes fail, but it is the perfect way to terminate the listener loop when you have a TCP Listener somewhere in your program. So wherever you handle your application close request, get a handle on those network refnums and close them. That "should" make any network operation waiting on such a refnum return with an error.
Better would be to make your network communication use much smaller timeouts and handle the close request itself, by polling a global close state controlled by your application close handler, or, if you start doing real software design some day, by using a producer/consumer framework throughout your application to handle those 10 loops correctly and in a fully controlled manner.
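The "small timeouts plus a polled stop flag" pattern described above can be sketched in Python (the function name is made up, and a LabVIEW implementation would poll a global or notifier instead of a threading.Event):

```python
import socket
import threading
import time

def reader_loop(sock, stop, poll_timeout=0.2):
    """Read with a short timeout and poll a shared stop flag, instead of
    blocking for minutes inside a single long network timeout."""
    sock.settimeout(poll_timeout)
    received = bytearray()
    while not stop.is_set():
        try:
            chunk = sock.recv(4096)
            if not chunk:          # peer disconnected: caller can reconnect
                break
            received.extend(chunk)
        except socket.timeout:
            continue               # no data yet: just check the stop flag again
    return bytes(received)
```

Because each blocking call lasts at most `poll_timeout` seconds, hitting the quit button (which sets the stop flag) terminates the loop almost immediately, without having to close the connection out from under it.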
  7. What messages? How does your code look? Is automatic error handling the only error handling you do in your code? Network communication is not something you can simply expect to always work. Your code needs to be able to handle all kinds of possible errors, such as timeouts, the peer disconnecting, etc., in order to operate reliably. Depending on the error and your application, you can sometimes ignore it (read timeouts, for instance), or you should close the connection and attempt to reconnect (client) or wait for another connection attempt (server).
  8. I haven't checked out how they did the silver controls, but since it's all graphics, I suspect it is either a video driver or maybe even specifically a GPU issue. Can you check whether the different motherboard revisions use a different chipset, or at least a different revision of it? What about the video driver?
  9. Please let us know what NI has to say about this. I'm going to be in this same spot soon and it would be nice to know why I need to distribute deployable licenses for a runtime app.
  10. Ok, on my own machine I usually run with an account that is part of the admin group, and the other computer is a standalone system with only one local user with local admin rights. But reading through it, it all makes sense. Since Vista, MS has tightened security according to the principle that a user should only have access to resources that do not influence other users on a system. This includes the entire registry except the user-specific hive, and those parts of the filesystem, including all settings directories, that are shared by all users. So ProgramData logically belongs to that. I just added a small piece of code to the configuration dialog that checks the write rights for the configuration file (or, if the file doesn't exist yet, for its directory) and only enables the Save button if write access is available. A dialog informs the user that the application needs to be started as admin in order to be able to make changes to the relevant configuration items. Since Vista you have to differentiate between elevated rights and a user account that has admin rights. The difference is that in order to be elevated, the process has to be started explicitly as admin, with an extra confirmation or password dialog, even if the current user is already part of the admin group. Your solution has one possible drawback: changing the rights of the config file alone is likely not enough, as the directory needs its rights changed too. And that allows anybody to add more files to the directory, although it should not be possible to change existing files that don't have write rights for normal users.
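The write-rights check described above can be sketched in Python (the function name is an invention for this sketch, and os.access is advisory only; a LabVIEW implementation would instead attempt to open the file for writing and inspect the error):

```python
import os

def can_write_config(path):
    """Return True if the config file -- or, if it doesn't exist yet,
    its parent directory -- is writable by the current user.
    Used to decide whether to enable a Save button."""
    if os.path.exists(path):
        return os.access(path, os.W_OK)
    return os.access(os.path.dirname(path) or ".", os.W_OK)
```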
  11. I want to know that too. I'm using Windows 7 Professional here in 32-bit, and 64-bit on another machine, and ProgramData/<Company Name> has seemed to work quite well so far. Or were they trying to access it with hardwired names instead of requesting the location from the OS? Or is it restricted for Guest accounts or similar?
  12. But you typically are not going to multicast 60MB files to many stations.
  13. To me it seems like an academic research project, and as such literally an "academic exercise". I'm sure such work is needed, but I wouldn't even bother to think about using UDP to transfer larger entities that need to stay consistent. TCP is tried and proven for such things, so why it needs to be UDP is beyond me. It's not as if UDP were easier to route in restricted networks or anything like that, so I simply don't see the benefit of going through the hassle of reimplementing parts of the TCP handling on top of UDP.
  14. Without that ini token the executable may still get started by the Windows shell, but one of the first things LabVIEW does on startup is to try to connect to a different (same-version) LabVIEW instance (through DDE communication), and if it finds one, it will simply pass any command line arguments to that instance instead and quit gracefully.
  15. To expand on Antoine's post: besides the hidden or non-hidden location, you should consider whether settings are user- or machine-specific. Things like hardware configuration you probably don't want to maintain on a per-user basis, but other things like application settings (data formats, paths, etc.) you most likely will want to keep separate for each user.
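The per-user versus per-machine split can be sketched as follows (the function name and the POSIX fallback paths are assumptions for this sketch, not an official API; on Windows the locations come from the APPDATA and PROGRAMDATA environment variables):

```python
import os

def settings_dir(company, app, per_user):
    """Pick a settings directory: per-user for user preferences,
    machine-wide for things like hardware configuration."""
    if per_user:
        base = os.environ.get("APPDATA",
                              os.path.expanduser("~/.config"))
    else:
        base = os.environ.get("PROGRAMDATA", "/etc")
    return os.path.join(base, company, app)
```

Requesting the base location from the OS like this, instead of hardwiring paths, is exactly what avoids the access problems discussed in the earlier posts.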
  16. You haven't shown much so far that would make me believe you have much of a technical understanding of these protocols. And wanting to do something is a nice thing, but whether it is a useful exercise is an entirely different question. Considering some notes in the paper about how differences in calling socket functions can have a significant influence on lost buffers, I think LabVIEW might not be the ideal environment to tackle this research. You have no control whatsoever over how LabVIEW handles the socket calls. In addition, it adds its own network data buffering, which you can only influence in limited ways from the LabVIEW nodes. And no, I'm not talking about controlling socket attributes by calling socket API functions directly, but about how LabVIEW calls recvfrom() and friends itself.
  17. To me it looks like IEC61850 is not about transferring large files consistently over the network. Doing that over UDP is really not a smart idea, as you have to implement on top of UDP something similar to what TCP already does for you. An interesting side note to this: a while back I was looking at the network protocol used by Second Life and other compatible virtual world environments. They used to run the entire communication over UDP with their own sequence handling, acknowledgement and retransmission requests. The reason to go for UDP was supposedly the low overhead required to make movement and interaction in a 3D virtual world possible at all. However, a few years ago they introduced a new mechanism for non-realtime-critical messages and are slowly porting more and more of the messages over to it. It is basically an HTTP-based protocol where the payload is usually a data structure in their own LLSD format, which is in many ways similar to JSON. And yes, I have implemented some basic messages in LabVIEW that got me as far as logging into a virtual world server.
  18. UDP is not a reliable protocol. Unless you intend to do A LOT of extra error handling, packet sequence management, lost-packet retransmission and whatnot, you can forget about using it to transfer a file that needs to stay consistent.
  19. I'm not sure I would undersign this as a LabVIEW bug. LabVIEW doesn't instantiate new executables but the Windows shell does. For some reasons the shell thinks that the launching of the command has failed and seems to retry it in an endless loop. This looks more like a flaw in the handling of launching an executable than in LabVIEW itself, especially since the spawning of new processes seems to happen so quickly that LabVIEW hasn't even gotten a chance to already be loaded into memory and start doing anything at all, that could influence the Windows shell in such a way.
  20. There is nothing wrong with your explaining skills! There are, however, people who do not want things explained to them, but instead want things done for them. Or English is not their native language, but the posts look pretty OK to me grammatically, so why someone with such a grasp of the language has never heard of "chunks" is beyond me. Also, while reading a file in chunks and sending it over UDP will probably work on a local network, I'm pretty sure the resulting file on the receiving end will be damaged after being transmitted like that over a more complex network infrastructure. That is, unless one also adds packet sequencing code to the transmission (and in doing so more or less reimplements the TCP protocol on top of UDP).
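What such minimal packet sequencing looks like can be sketched in Python (the function names are made up; note there are no acknowledgements or retransmissions here, which is exactly why this still falls short of what TCP already provides):

```python
import socket
import struct

HEADER = struct.Struct("!I")   # 4-byte big-endian sequence number

def send_chunks(sock, addr, data, chunk_size=1024):
    """Send data in numbered chunks so the receiver can detect
    reordering; loss handling is deliberately left out of this sketch."""
    for seq, off in enumerate(range(0, len(data), chunk_size)):
        sock.sendto(HEADER.pack(seq) + data[off:off + chunk_size], addr)

def receive_chunks(sock, expected):
    """Collect chunks by sequence number and reassemble them in order,
    regardless of the order in which the datagrams arrive."""
    chunks = {}
    while len(chunks) < expected:
        packet, _ = sock.recvfrom(65536)
        seq, = HEADER.unpack(packet[:HEADER.size])
        chunks[seq] = packet[HEADER.size:]
    return b"".join(chunks[i] for i in range(expected))
```

On loopback this works because nothing gets dropped; over a real network you would additionally need acks, retransmission and a window, i.e. most of TCP.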
  21. And going beyond a 1500-byte MTU is only an option if you are on a controlled network where you know about all the routers between the two parties. Once you go outside of that, such as over an internet connection, the maximum MTU is entirely out of your hands, and going beyond the default of 1500 bytes is really gambling that the communication will actually work, unless you do a lot of extra work reassembling the incoming packets into the right order on the receiving end. And that is something that, even on embedded devices, is usually handled much more efficiently at the TCP protocol level, so why go to the hassle of UDP in that case? As to the data transfer rate of LabVIEW TCP: it has some overhead in its internal buffer handling, but I have in the past reached half the bandwidth of the network without too much trouble, as long as you don't do a two-way handshake for every packet sent. There used to be some challenges in older LabVIEW versions, with LabVIEW simply losing data or even crashing under heavy network load (supposedly some rare race conditions in its socket handling), but I haven't seen such things in newer LabVIEW versions.
  22. Actually it is a little different. In LabVIEW < 5.0 a scalar boolean was a 16-bit integer; the most significant bit defined the boolean status and everything else was don't-care. However, boolean arrays were packed into words, so an array of <= 16 booleans would consume 16 bits. The history of this is presumably the classic MacOS, which had a somewhat similar notion, but the packing and unpacking of boolean arrays actually caused quite bad performance for some operations. So LabVIEW 5.0 changed it to the more common 1-byte-per-boolean notation, which is what most C compilers also use as their default boolean implementation (although the C standard nowhere specifies what size a boolean has to be).
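The packed pre-5.0 array layout can be illustrated with a short sketch (this is an MSB-first bit-packing illustration, not a byte-exact reproduction of the historical LabVIEW format):

```python
def pack_bools(bools):
    """Pack a list of booleans into 16-bit words, most significant
    bit first: <= 16 booleans consume a single 16-bit word."""
    words = []
    for off in range(0, len(bools), 16):
        word = 0
        for bit, b in enumerate(bools[off:off + 16]):
            if b:
                word |= 1 << (15 - bit)
        words.append(word)
    return words
```

The cost the post alludes to is visible here: every element access requires shifting and masking, whereas the byte-per-boolean layout is a plain indexed load.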
  23. Well, like with any other DLL: using the Windows API functions LoadLibrary() and GetProcAddress(). The bad thing about this is that it will make your SQLite library more Windows-specific and less portable. My personal choice would be to create an extra C file that adds the necessary C functions and then either add it to the SQLite source, compiling my own LabVIEW-friendly SQLite shared library, or, in order to be able to reuse prebuilt SQLite libraries, create a separate DLL that contains these helper functions. Yes, this requires you to use a C compiler for every platform you want to support your library on, but once you start supporting more than one platform (and Windows 64-bit is already a different platform), maintaining the platform-specific dynamic loading of those helper functions quickly becomes a real hassle in comparison to simply creating the helper library directly in C. And your LabVIEW VI library stays tidy and clean, no matter what LabVIEW platform you want to support. BTW: I was just checking out Fossil, from the same author. It looks like an interesting lightweight DVCS application based entirely on an embedded SQLite database for its data storage. Advantages: an integrated issue tracker and wiki, support for binary files (although obviously without merging of branches for those), and a simple HTTP-based network protocol with SSL support, both for syncing and for interfacing with it from a different tool (like a LabVIEW plugin). And all that in a single executable that is barely larger than the SQLite database kernel itself. The disadvantage, if you can call it that, is the rather simple web-based user interface to all this functionality.
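The LoadLibrary()/GetProcAddress() pattern mentioned above can be sketched outside of C as well; here it is with Python's ctypes, resolving a well-known C runtime function at run time (strlen is just a stand-in for an SQLite helper function; on Windows you would use ctypes.WinDLL with the DLL's path instead):

```python
import ctypes
import ctypes.util

# LoadLibrary() equivalent: locate and load the C runtime library.
# (On POSIX, CDLL(None) falls back to the running process's symbols.)
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# GetProcAddress() equivalent: resolve a symbol by name and
# declare its calling signature.
strlen = libc.strlen
strlen.argtypes = [ctypes.c_char_p]
strlen.restype = ctypes.c_size_t
```

Every such resolved symbol has to be declared and maintained per platform, which is the maintenance burden the post argues against, compared with compiling a small helper library in C once per platform.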
  24. Are you trying to use the script node or the LabPython VIs? For the VIs, pytscript.dll should not be required at all. For the script node it is required, and it used to be necessary to install it in the <Program Files>/National Instruments/Shared/LabVIEW Run-Time/<LV Version>/script directory for the script node to work in an executable. I'm not sure if the application builder tries to do that somehow in newer versions without really getting it right. The DLL is located in your <LabVIEW>/resource/script directory in the development environment. Try adding it from there to your project as an always-included component, installing it in the corresponding runtime subdirectory. When creating an installer, you should add a script subdirectory under the LabVIEW Run-Time destination and specify that pytscript.dll be installed into that directory.
  25. Thanks for all the feedback here. We certainly are listening and are considering the remarks seriously. I'm currently busy getting the LuaVIEW toolkit ready for a new release, but the way LuaVIEW has interfaced to Lua has caused some serious headaches in getting it ported to Lua 5.1 (and Lua 5.2 will probably bring a few more hurdles, especially on the script side, as some differences between Lua 5.1 and 5.2 are subtle but could have rather far-reaching effects for certain things). For those interested in getting LuaVIEW working with the newest LabVIEW releases, I posted a minor update, version 1.2.2, to the website. It fixes the unit test crash for LabVIEW versions >= 2009 (which is really a crash in a LabVIEW node for the Motion Resource) and also fixes two minor errors in the datalogging framework for those using it. No other changes have been made. We are planning a short beta program for LuaVIEW 2.0 on Windows and Linux within about 4 weeks and intend to release the toolkit officially for all platforms before NI Week. Interested people can send me a PM on this forum to apply for the beta test.