Everything posted by Rolf Kalbermatter

  1. QUOTE(Justin Goeres @ Nov 16 2007, 05:03 PM) Before LabVIEW 8.5, LabVIEW could only save back one version. 8.5 is the first version that can save two steps back (to 8.2 and 8.0), and according to a presentation I attended at this year's LabVIEW User Group they intend to maintain that, or do even better, from now on, since they seem to have refactored the versioning of VIs in 8.x in such a way that back-saving to older versions got a lot simpler. That is probably also the reason why they removed support for loading 4.x and 5.x VIs in 8.5. Rolf Kalbermatter
  2. QUOTE(Götz Becker @ Nov 13 2007, 08:29 AM) Well, obviously the ports were not closed properly, since your application got killed. Windows does have some resource garbage collection that will usually close all resources that were allocated on behalf of an application when that application leaves the building. For some reason, the way the LabVIEW application was terminated somehow prevented Windows from properly garbage collecting the winsock handles. There is probably not much you can do about that, except hope you won't have to kill LabVIEW like that again. Rolf Kalbermatter
  3. QUOTE(Cool-LV @ Nov 14 2007, 09:13 PM) Well, it's not getting much clearer, but if the password is the only problem you are having: a password is only required for remote shares that have one defined. If it is a share that is open for anyone to read, you don't need a password to connect to it. For a share that is password protected, there is no way to connect to it without the password. Rolf Kalbermatter
  4. QUOTE(tcplomp @ Nov 15 2007, 08:34 AM) Well, not really, and they have much better things to do with their time. But in theory a few guys there actually have an idea how their compiler aligns machine code and where it puts it inside the binary VI structure, so it would be possible to extract it, point a decent disassembler at it, and guess the original code from that. But disassembling code is very hard, even more time consuming, and in the end you normally end up with a code construct that only sort of resembles the functionality of the original code. However, a machine-code-to-LabVIEW-code disassembler is definitely not available. The best you could get is some machine code to pseudo C code, I would guess. Anything else would simply be way too much work, especially since the actual LabVIEW compiler evolves over time due to new VI elements as well as better code optimization. Machine code generated by LabVIEW 3 will for sure look different from the same diagram compiled in LabVIEW 8.5. Rolf Kalbermatter
  5. QUOTE(Neville D @ Nov 9 2007, 01:24 PM) Windows can't do that. And to be honest I don't think any hardware, except maybe some very special dedicated high speed routers, would support that. The IP routing for such a system would get way too complicated, with packets ending up being echoed over and over again. Also, you would have to have two network cards on both ends anyhow, and in that case what prevents you from making them part of two different subnets and maintaining two separate connections, one on each subnet? I also think you expect a bit too much of Gig-E. The theoretical bandwidth of about 100 MB per second is definitely not reached for an end-to-end connection, both because of TCP/IP overhead and because of limited throughput in the bus interface and, even more so, the TCP/IP stacks. They haven't really been designed for such high speeds and usually can't get even close to the theoretical limit. Also, the way data is handled in the LabVIEW TCP/IP nodes makes them slower than what you can get with more direct access to the socket library, but that access is also a lot harder to manage than it is in LabVIEW. Rolf Kalbermatter
  6. QUOTE(kresh @ Nov 5 2007, 08:17 AM) Duplicate post: http://forums.ni.com/ni/board/message?boar...d=19997#M282599 Rolf Kalbermatter
  7. QUOTE(Tomi Maila @ Nov 6 2007, 02:52 PM) Better yet! Just use LabVIEW alone as the library name; watch out, this one is case sensitive! Then you won't have to worry about lvrt.dll versus labview.exe, depending on whether you build an application or not. Rolf Kalbermatter
  8. QUOTE(crelf @ Oct 27 2007, 03:18 PM) Standardized works too, and for floating point values NaN is probably the best approach; at least I do it that way, if I do it at all. Of course it could fail if there is an actual calculation that is invalid and its result is wired to that input. But even then, having tested for NaN (and probably doing some default action) is still most probably the best course of action; a rough sketch of that idea follows below. In terms of subVIs I have not yet come across situations where I really wanted to know if a value was unwired, but quite often whether it was valid at all, wired or not! Rolf Kalbermatter
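     A minimal sketch of the NaN-as-default idea in C (not from the original post; the function name and default value are made up for illustration): an input that arrives as NaN is treated as "not supplied" and replaced by a default.

         #include <math.h>

         /* Hypothetical helper: a NaN input means "use the default",
            mirroring the NaN sentinel technique for optional inputs. */
         static double timeout_or_default(double timeout_s)
         {
             const double kDefaultTimeout = 10.0;   /* assumed default value */
             return isnan(timeout_s) ? kDefaultTimeout : timeout_s;
         }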
  9. QUOTE(sprezzaturon @ Nov 1 2007, 09:22 AM) The standard developer edition does not include IMAQ Vision; it's an add-on to the developer edition too. And then comes the question: isn't there some sort of discrepancy in LabVIEW Developer Edition + IMAQ Vision == WebCam? I would think so. That said, if you have IMAQ Vision, then IMAQ for USB (which is a free and unsupported download) certainly would work. If you don't have IMAQ Vision, I'm not sure what the analogue of the coffee machine routines would be that LabVIEW might contain, because your LabVIEW obviously doesn't contain real image manipulation functions either. Of course you can always use generic numeric analysis functions to work on images, but by that definition every compiler out there should come with webcam support. QUOTE I'm not expecting the world here. It's like buying the best DVD player in the world only to get it home and find out that it's buttonless and the remote is going to cost as much as the DVD player. Just a little vent of frustration... With today's DVD player street prices that is not even unlikely to happen. Maybe that says something about what might be wrong in this world, but that is more of a general problem and has not much to do with LabVIEW. Rolf Kalbermatter
  10. QUOTE(daro @ Oct 27 2007, 09:10 AM) I really doubt that Vision Assistant 7.1 can control LabVIEW 8.2 to create VIs in it. The other way around would work, I think, but not across an arbitrary number of versions backwards. Rolf Kalbermatter
  11. QUOTE(sprezzaturon @ Oct 22 2007, 07:01 AM) And I still can't control my coffee machine with LabVIEW!! Honestly, you can always expect anything and everything, but I don't think that by itself makes it a legitimate expectation. Most LabVIEW developers couldn't care less about webcam access. If all you want to do is control a live webcam, then the LabVIEW Developer Edition certainly is not the right tool. Rolf Kalbermatter
  12. QUOTE(Panos @ Oct 30 2007, 08:16 AM) Cross posted at http://forums.ni.com/ni/board/message?boar...ssage.id=281182 Please mention it when you cross-post, so people know! Rolf Kalbermatter
  13. QUOTE(Giseli Ramos @ Oct 30 2007, 08:51 AM) The RtlMoveMemory function does not have to be a problem for you. LabVIEW itself exports a MoveBlock() function that you can access with the Call Library Node; a sketch of its prototype follows below. There is an example at http://zone.ni.com/devzone/cda/epd/p/id/3672 that shows exactly this, and it works on every LabVIEW platform. But since you have the source of your shared lib, go for the changed API. It's clearer! Rolf Kalbermatter
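     From memory, the MoveBlock prototype in extcode.h looks roughly like the sketch below; double-check it against the cintools headers of your LabVIEW version before relying on it. In the Call Library Node you would enter LabVIEW (case sensitive, see post 7) as the library name, so the call resolves to labview.exe or lvrt.dll as appropriate.

         #include <stddef.h>   /* size_t */

         /* Prototype of LabVIEW's exported MoveBlock, as I remember it: */
         extern void MoveBlock(const void *source, void *destination, size_t numBytes);

         /* Illustrative use: copy a double out of a raw address, the typical
            reason for reaching for RtlMoveMemory or MoveBlock from a diagram. */
         void copy_double_from_address(const void *address, double *out)
         {
             MoveBlock(address, out, sizeof *out);
         }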
  14. QUOTE(PJM_labview @ Oct 24 2007, 11:07 AM) Which VI/XNode file would you suggest to look at as a starting point for that? Rolf Kalbermatter
  15. QUOTE(tharrenos @ Oct 24 2007, 08:49 AM) There is no conversion really. The TCP nodes interpret the data that is passed to them simply as a stream of bytes, which a string in fact is too. So if you want to send binary data you simply create an array of U8 bytes laid out the way you want your byte stream to look and cast it to a string with the Byte Array To String node. If it is text your device expects, you format that text into the string; the sketch below shows the same idea in C. Rolf Kalbermatter
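     A rough C sketch of the point that text and binary data are the same thing on the wire (the write routine, the SCPI-style command and the frame bytes are all made up for illustration):

         #include <stdint.h>
         #include <string.h>

         /* Stand-in for send() on a connected socket or the LabVIEW TCP Write node. */
         extern void tcp_write(const void *data, size_t len);

         void send_examples(void)
         {
             /* "Text" command for an instrument expecting ASCII: */
             const char *cmd = "*IDN?\n";
             tcp_write(cmd, strlen(cmd));

             /* Binary frame built byte by byte (the U8-array-to-string case): */
             uint8_t frame[] = { 0x02, 0x10, 0xFF, 0x03 };
             tcp_write(frame, sizeof frame);
         }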
  16. QUOTE(Tomi Maila @ Oct 22 2007, 02:43 PM) Apart from the already mentioned templates, I don't think there is any way to have an XNode work as a structure. They can be resizing nodes, but that is about it. And if I were you I would try to patent it before you publish it. Otherwise NI will do it :ninja: Rolf Kalbermatter
  17. QUOTE(tharrenos @ Oct 23 2007, 05:22 PM) Well, how the Lantronix device converts TCP/IP into RS-232 is defined by them. So read the documentation for the Lantronix device, bug their technical support and ask for some examples. And yes, don't expect them to have LabVIEW examples. Much more than C examples, maybe one VB example too, is probably not available, but it should give you some ideas. Rolf Kalbermatter
  18. QUOTE(Michael_Aivaliotis @ Oct 5 2007, 03:42 PM) It's not that difficult once you know how ;-) Rolf Kalbermatter
  19. QUOTE(Justin Goeres @ Oct 17 2007, 08:10 AM) Well, that is not entirely true. The LabVIEW DSC Toolkit comes with an access control library with user configuration dialogs and login screens, and a wizard that can enable and disable controls on a front panel based on the currently active user from this access control system. It is quite powerful, but the entire DSC Toolkit is not cheap, and if it is just for this user access control subsystem it might be a bit of overkill. On the other hand it is (or at least used to be, not sure about the latest status) something you could embed in your own applications without runtime license fees (not true for most other parts of the DSC Toolkit), and designing your own even remotely comparable access control system will cost a lot more money and not have the ease of use of a front panel control configuration wizard. Rolf Kalbermatter
  20. QUOTE(vinayk @ Oct 19 2007, 01:14 PM) The error means, as the error handler VI can tell you, that the VI is not in memory. Why, you may ask? Because you think you provide a path to the VI to the Open VI Reference primitive, but you don't. This primitive will interpret a string input as a VI name and nothing else, and when you provide only a VI name the VI must already be in memory. What you want to do is change the string constant into a path constant instead! And with that we are at the next problem. Using absolute path names to refer to VIs is ABSOLUTELY the worst you can do. This will immediately break when you move your project, or build an executable and/or install it on another machine. So reconsider how you want to refer to those dynamic VIs. The way I do this is to have a VI somewhere whose location is relative to the dynamically called VIs, for instance in the same directory. This VI queries its own path, strips its name from that path and returns the result. Now append the VI name you want to load to that path, et voilà; a sketch of the idea is below. Rolf Kalbermatter
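     The same "path relative to a known file" idea, sketched in C (the paths and names are made up; LabVIEW's path primitives and the '/' separator here are stand-ins for whatever the platform uses):

         #include <stdio.h>
         #include <string.h>

         /* Strip the file name from a known path and append the name of the
            VI to load dynamically, instead of hard-coding an absolute path. */
         void sibling_path(const char *known_path, const char *target_name,
                           char *out, size_t out_size)
         {
             const char *last_sep = strrchr(known_path, '/');
             size_t dir_len = last_sep ? (size_t)(last_sep - known_path + 1) : 0;
             snprintf(out, out_size, "%.*s%s", (int)dir_len, known_path, target_name);
         }

         int main(void)
         {
             char path[256];
             sibling_path("/project/plugins/Locator.vi", "Worker.vi", path, sizeof path);
             printf("%s\n", path);   /* prints /project/plugins/Worker.vi */
             return 0;
         }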
  21. QUOTE(Neville D @ Oct 11 2007, 12:10 PM) That configuration should be stored in the LabVIEW INI file and reused on each load. I'm not sure about the current version of LabVIEW, but you actually needed to restart LabVIEW to make that configuration active back when thread support was introduced in LabVIEW 5.0. Rolf Kalbermatter
  22. QUOTE(Gabi1 @ Oct 9 2007, 09:18 AM) Yes, I do mean that, and it's because of what you mention in your last sentence: a lot of work to create something which is hard to debug, maintain and use. I have some experience with this, although with templates, since reentrancy was not possible for dynamic VIs back then. And it is a pain in the ###### whenever I have to go back, change something about it, and then test it all again. Rolf Kalbermatter
  23. QUOTE(Smikey @ Oct 9 2007, 02:50 AM) I would say you are on the wrong board here. The SMX2064 not being an NI product, MAX can do little more than verify that the general PXI registers are there and that some manufacturer identification is available. Other than that, MAX almost certainly does not know how to access that board. As for the Visual Basic application: if it encounters a problem, you will have to take it up with the programmer of that application. Again, NI, and probably just about anybody else here, cannot even guess what the problem might be. Rolf Kalbermatter
  24. QUOTE(Gabi1 @ Oct 9 2007, 08:49 AM) Well, aside from the obvious approach of simply having the VI on a block diagram and accessing that particular instance from there, no! Unlike non-reentrant VIs, where the name of a VI is enough to reopen a VI reference to it, reentrant VIs keep their data space attached to the VI instance (reference), and once you lose such a reference you lose the data space. But then, using reentrant VIs through VI Server is usually more trouble than it is worth, so I personally wouldn't really do it. Rolf Kalbermatter
  25. QUOTE(JFM @ Oct 8 2007, 07:54 AM) I'm not sure what version of Pharlap they purchased. But even if they got the source code for it, fiddling with the network socket code in an OS is a very tricky thing to do. Probably there is a #define somewhere for the maximum number of allowable sockets, and that is where it goes wrong. In many RT OSes the sockets are maintained in a static list, and being static means it is preallocated at compile time and can never grow bigger than that maximum; the sketch below illustrates the pattern. Changing that into a dynamic list has possible performance drawbacks and makes the resource management in general more complex, so that is not something one wants to retrofit into an existing static implementation. Nice to hear that VxWorks seems not to have this limitation. Rolf Kalbermatter
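     An illustrative C sketch (not Pharlap's actual source) of why such a limit is hard to raise after the fact: the socket table is sized by a compile-time constant and allocated statically, so no runtime setting can grow it.

         #define MAX_SOCKETS 64          /* made-up value for illustration */

         typedef struct {
             int in_use;
             int fd;
             /* ... protocol control block, buffers, ... */
         } socket_entry_t;

         static socket_entry_t socket_table[MAX_SOCKETS];   /* fixed at compile time */

         static socket_entry_t *alloc_socket(void)
         {
             for (int i = 0; i < MAX_SOCKETS; i++) {
                 if (!socket_table[i].in_use) {
                     socket_table[i].in_use = 1;
                     return &socket_table[i];
                 }
             }
             return 0;   /* table full: the "too many sockets" error */
         }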