Rolf Kalbermatter

Posts posted by Rolf Kalbermatter

  1. QUOTE(Jim Kring @ Nov 5 2007, 04:37 PM)

    I've done a little bit more experimenting and found another quirk. Please see the attached sample project.

    Download File:post-17-1194298015.zip

    1) Open the "DotNetTest.lvproj" project.

    2) Note that there is nothing in the "Dependencies" node of the project.

    3) Open the Front Panel of "DotNetTest.vi"

    4) Note that "System" appears in the "Dependencies" node of the project.

    5) Close the Front Panel of "DotNetTest.vi"

    6) Note that "System" disappears from the "Dependencies" node of the project.

    [Screenshot: "System" appearing under the project's Dependencies node]

    And here's some more info: "rendezvs", "semaphor", and "systemexec" are the names of the Code Interface Nodes (CINs) that are called by the Rendezvous, Semaphore, and System Exec VIs.

    For example, these are the names as they appear in the output of the VI Server Application method "Linker:Read Info From File".

    [Screenshot: output of the "Linker:Read Info From File" VI Server method]

    That was my first guess when I started reading this thread, and I'm pretty sure you hit the nail on the head. The Dependencies scan in the project is almost certainly using "Linker:Read Info From File", but probably only when loading a VI. This function is very fast at returning all relevant dependencies, including CINs, external code libraries and such, so it makes sense to use it instead of trying to do something in LabVIEW itself, which would be WAAAAAY slower.

    I'm not sure it makes sense to show the CIN as a dependency, as it is really already embedded in the relevant VI and as such there is no physical file on disk that could be mapped to this item. So the filtering of the items should probably have taken out CINs, or otherwise it should at least use a different icon.

    Rolf Kalbermatter

  2. QUOTE(silmaril @ Oct 23 2007, 03:53 AM)

    It would be interesting to see an example of this behaviour.

    Maybe it's an effect similar to what I saw in one project, where I printed a front panel as a report:

    First call: panel with default values was printed,

    Second call: panel with values from first call was printed,

    and so on...

    It seems that LabVIEW simply generated the image to print before updating the data in the controls.

    Fortunately, all the elements were in one cluster, so the solution was easy: I wired the cluster control directly into a local variable of itself and used a sequence structure to make sure the image was generated after this.

    Could this be the same thing you are experiencing?

    I can't support the original complaint either; I print front panels quite often, both to real printers and to redirected PDF printer drivers. But yes, making sure that the front panel is drawn before you invoke the print operation is very important ;-). There are actually many ways to do this, such as using an explicit Print Panel To Printer.vi I got from the NI site, as well as print on completion of a VI, though I think I haven't used the latter in quite some time. Also, making sure the front panel is visible during printing might make a difference, depending on the display driver. In that case, decreasing the display speed optimization can often make it work even with a hidden front panel.

    And if these things don't help (quite unlikely, and probably the reason why this hasn't been addressed yet as the OP thinks: bugs that are hard to reproduce, or that are really more an operator error, are something most developers react to with a "Not a bug" label), changing the printing method could make a difference too. Bitmap, PostScript and greyscale, as well as changing printer drivers, could actually give more information as to what could be wrong. Not all printer drivers work well in all print modes, and that is quite often more a bug in the printer driver than in LabVIEW.

    Rolf Kalbermatter

  3. QUOTE(yogi reddy @ Nov 19 2007, 11:22 AM)

    Thanks for your response :worship: ,

    but it isn't working yet; I am getting error 56.

    I have changed the second TCP Read timeout from 0 to 2000, and then it showed "LabVIEW memory is full". :headbang:

    Then your sender side is wrong somehow. From what I can see, you are trying to send a 2D array, of a size I'm way too lazy to determine by parsing the MATLAB script, every 10 ms to each connected client. If this array is anything bigger than a few elements by a few elements, you are happily throwing data at the winsock library that has to exhaust your memory sooner or later.

    Why you try to send the same data over and over again at a 10 ms interval to each client is really beyond me. Just send it once when a new connection arrives and then close the connection for now. Also, your error handling is sub-par by far. Not adding the connection refnum back into your array after an error is a good idea, but that doesn't mean the refnum wasn't valid and shouldn't at least be closed, or an attempt made to close it. Otherwise you leak memory with every refnum that gets thrown out of the bath due to some communication error on that connection.
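
    The same idea expressed as a minimal Winsock sketch (only an illustration; serve_once, listener and the payload are my own names, and in LabVIEW the equivalent nodes are TCP Wait On Listener, TCP Write and TCP Close Connection):

        #include <winsock2.h>

        /* Sketch: when a client connects, send the data once and close the
           connection, instead of pushing the same array at every client
           every 10 ms. 'listener' is assumed to be a bound, listening socket. */
        static void serve_once(SOCKET listener, const char *data, int len)
        {
            SOCKET client = accept(listener, NULL, NULL);
            if (client == INVALID_SOCKET)
                return;

            if (send(client, data, len, 0) == SOCKET_ERROR) {
                /* even after a send error the handle is still valid and must be
                   closed, otherwise the connection (refnum) leaks */
            }
            closesocket(client);
        }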

    Rolf Kalbermatter

  4. QUOTE(Cool-LV @ Nov 20 2007, 01:47 AM)

    OK, I attached a picture to give the details. The function is the same as pressing the disable button to disable the connection, and pressing the enable button to reconnect it. Thanks!

    The post before your post explains that exactly. ipconfig can do that too. ipconfig /release [adapter name] will basically disable the network interface. ipconfig /renew [adapter name] will reconnect it.
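
    If you just need to trigger that from a program, a minimal sketch is to shell out to ipconfig (the adapter name here is only an assumption; use whatever "ipconfig /all" reports on the target machine). In LabVIEW you would pass the same command lines to System Exec.vi:

        #include <stdlib.h>

        int main(void)
        {
            /* "Local Area Connection" is an assumed adapter name */
            system("ipconfig /release \"Local Area Connection\"");
            system("ipconfig /renew \"Local Area Connection\"");
            return 0;
        }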

    Of course there is certainly a way to do this by accessing the Windows API, but the network adapter enumeration API uses data structures that you most probably do not want to deal with in the Call Library Node, believe me.

    Rolf Kalbermatter

  5. QUOTE(liber @ Oct 26 2007, 07:58 AM)

    Hello,

    I've been using the LVSERIAL library for several years now, and it has worked great. The program is a wrapper for communicating with the Windows RS-232 port driver; it frees you from using VISA. It was written by Martin Henz.

    Recently we started using LabVIEW and LVSERIAL on Intel Core 2 Duo machines (specifically Dell Latitude D520), and our LabVIEW executable periodically crashes (after anywhere from 2 to 10 hours of operation), indicating an exception in one of the LVSERIAL functions ("comm read.vi"). The very same program running on single-core processors (Pentium M, Celeron M, Pentium III) never shows any problems and runs for many days at a time.

    If I replace LVSERIAL calls with VISA calls in the same program on the same Dell machine, the program becomes stable.

    I would still prefer using LVSERIAL over VISA because of simplified installation and support (in particular, not having to deal with the issues of aliases and visaconf.ini), so I would like to resolve the issue of crashes.

    Does anyone have any ideas as to what may be causing these crashes?

    Thanks,

    Sergey

    I'm not too familiar with lvserial and don't know its internal details. I'm also not sure what Visual C version and options Martin used to create the DLL. It is very much possible that the generated code is not able to cope too well with multi-core (and maybe hyperthreading) CPUs. If that were the case, recompiling the DLL with safer options and/or a newer Visual C compiler could help.

    Another possibility is the use of globals or some other non-reentrant constructs in the C code, combined with not using the UI threading setting in the Call Library Node.
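
    Just to illustrate what such a non-reentrant construct could look like (a made-up sketch, not lvserial's actual code): a DLL function that keeps data in a static buffer works when every call is serialized through the UI thread, but with the reentrant setting on a multi-core CPU two calls can hit the buffer at the same time and corrupt it.

        #include <string.h>

        /* Hypothetical non-reentrant DLL function: the static buffer is shared
           by all callers. "Run in UI thread" serializes the calls and hides the
           problem; reentrant calls on a multi-core machine can write to
           lastReply concurrently and corrupt it. */
        static char lastReply[256];

        const char *GetLastReply(const char *newData, size_t len)
        {
            if (len >= sizeof(lastReply))
                len = sizeof(lastReply) - 1;
            memcpy(lastReply, newData, len);   /* unsynchronized shared write */
            lastReply[len] = '\0';
            return lastReply;
        }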

    But as Michael said: the library is nice, but it comes at the cost of not having an entire development team ready to deal with any issues that can come up due to new OSes, new hardware, or whatever else the powers of this world can come up with.

    Rolf Kalbermatter

  6. QUOTE(akala @ Nov 14 2007, 12:19 PM)

    Do you know a way to silently install the LabVIEW 8.0 Run-Time Engine on a client PC via HTML or PHP?

    No way with any recent OS! The ability to start an arbitrary executable remotely through an HTTP connection is exactly what all adware, malware and other suspect programmers would like to have. A system that allows that will be infested with those kinds of things in a very short time when browsing the web. So any user allowing that in his browser settings is out of his mind.

    Rolf Kalbermatter

  7. QUOTE(paololt @ Nov 17 2007, 05:25 AM)

    Hi all, I need some help.

    I have to implement a ping between two PCs; I'm following an example I found in the LabVIEW examples under the UDP networking section.

    My problem is this: I have two strings to compare bit by bit, the data sent and the data received, and then I have to calculate the bit error rate.

    How can I do that?

    Thank you

    First, your use of the word ping could be a little misleading here. It usually means a specific network procedure that passes small packets using the ICMP protocol. ICMP is one of the lower-level protocols directly above the IP packet layer, and there are no VIs to access it directly in LabVIEW.

    From what I see, you are using UDP instead to do some bit error calculations. I'm not sure what you are trying to achieve with this, but if it is about classifying hardware failures, for instance in the network cards or network infrastructure, your approach is greatly flawed. UDP, while connectionless and not guaranteeing data delivery, is also based on the IP protocol and as such the data has already gone through IP checksums and the like. So you won't really get much information about the hardware failure rate involved, but at best some indication of the ability of your network topology to cope with the amount of data you throw at it. For small buffers, and when not on a congested corporate network, this should be very close to 100%.
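
    For the actual bit-by-bit comparison you asked about, a small sketch could look like the function below (the name and buffers are just illustrative). In LabVIEW the same thing amounts to String To Byte Array on both strings, XOR-ing the two arrays, and counting the set bits.

        #include <stddef.h>
        #include <stdint.h>

        /* Compare two equal-length byte buffers bit by bit and return the
           bit error rate: differing bits divided by total bits compared. */
        double bit_error_rate(const uint8_t *sent, const uint8_t *received, size_t len)
        {
            size_t errors = 0;
            for (size_t i = 0; i < len; i++) {
                uint8_t diff = (uint8_t)(sent[i] ^ received[i]); /* differing bits are set */
                while (diff) {
                    errors += diff & 1u;
                    diff >>= 1;
                }
            }
            return len ? (double)errors / (double)(len * 8) : 0.0;
        }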

    Rolf Kalbermatter

  8. QUOTE(Justin Goeres @ Nov 16 2007, 05:03 PM)

    You'll have to do it in multiple steps. Each version of LabVIEW can save back to a certain subset of previous versions of LabVIEW.

    I don't have 8.2.1 open right now, but my guess is that it can only save back to LabVIEW 8.0. So you'd have to save the VI from 8.2.1 -> 8.0, then open it in 8.0 and save it from 8.0 -> 7.1(.1), then open it in 7.1(.1) and save it from 7.1(.1) -> 7.0.

    There's an outside shot that you could go all the way from 8.0 to 7.0 in one step, but my recollection is that you can't.

    You might be able to work out an arrangement with somebody around here to do it for you if you don't have those intermediate versions and if your project isn't huge.

    Before LabVIEW 8.5, LabVIEW could only save back one version. 8.5 is the first that can save back two steps (to 8.2 and 8.0), and according to a presentation I attended at this year's LabVIEW User Group they intend to maintain that, or do even better, from now on, since they seem to have refactored the versioning of VIs in 8.x in such a way that back-saving to older versions got a lot simpler. That is probably also the reason why they removed support for loading 4.x and 5.x VIs in 8.5.

    Rolf Kalbermatter

  9. QUOTE(Götz Becker @ Nov 13 2007, 08:29 AM)

    Hi all,

    I just had a very strange LV (8.5) crash. I was loading 2 main VIs out of 2 separate LV projects. During the load of the second VI, LV hung. After killing LV and trying to restart it, I got the error that the VI Server port 3363 was in use. No other LV was running at the time. The VIs use several DLLs which are also doing TCP/IP on port 31415. These ports stayed open too after killing LV. A quick netstat /a /b showed:

    Proto  Lokale Adresse     Remoteadresse              Status             PID
    TCP    gnomeregan:3363    gnomeregan.xxxx.de:0       ABHÖREN            3052 [System]
    TCP    gnomeregan:31415   gnomeregan.xxxx.de:3451    SCHLIESSEN_WARTEN  3052 [System]
    TCP    gnomeregan:31415   gnomeregan.xxxx.de:3456    SCHLIESSEN_WARTEN  3052 [System]
    TCP    gnomeregan:31415   gnomeregan.xxxx.de:3455    SCHLIESSEN_WARTEN  3052 [System]
    TCP    gnomeregan:31415   gnomeregan.xxxx.de:3450    SCHLIESSEN_WARTEN  3052 [System]
    TCP    gnomeregan:31415   gnomeregan.xxxx.de:3452    SCHLIESSEN_WARTEN  3052 [System]

    The strange thing was that I couldn't find the PID 3052 (checked with Task Manager and Process Explorer). Has anyone seen such lost resources with no valid PID? The only fix I had was to reboot Windows (XP Pro, SP2). Any ideas how this could have happened or where to start searching for a reason?

    Well, obviously the ports were not closed properly since your application got killed. Windows does have some resource garbage collection that will usually close all resources that were created on behalf of an application when that application leaves the building. For some reason, the manner in which the LabVIEW application was terminated somehow prevented Windows from properly garbage collecting the winsock handles.

    There is probably not much you can do about that except hope that you won't have to kill LabVIEW like that again.

    Rolf Kalbermatter

  10. QUOTE(Cool-LV @ Nov 14 2007, 09:13 PM)

    thanks all,

    and sorry for the unclear question. The OS is WinXP. The function should simulate opening the local connection status, pressing disable to disable the network, and pressing enable to enable it again. Thanks.

    I haven't tried the suggestion above, because I see it needs a user password to connect; is there an easier way?

    Well, it's not getting much clearer, but if the password is the only problem you are having: that is only required for remote shares that have a password defined. If it is a share that is open for anyone to read, you don't need a password to connect to it. For a share that is password protected, there is no way to connect to it without the password.

    Rolf Kalbermatter

  11. QUOTE(tcplomp @ Nov 15 2007, 08:34 AM)

    Are you sure about that?

    You mean NI can reverse engineer any VI? Or do they need to read the machine code and make very good guesses?

    Ton

    Well, not really, and they have much better things to do with their time. But in theory a few people there actually have an idea how their compiler lays out machine code and where it puts it inside the binary VI structure, so it would be possible to extract it, point a decent disassembler at it, and guess the original code from that. But disassembling code is very hard and even more time consuming, and in the end you normally end up with a code construct that only sort of resembles the functionality of the original code.

    However, a machine code to LabVIEW code disassembler is definitely not available. The best you could get is some machine code to pseudo-C code, I would guess. Anything else would simply be way too much work, especially since the actual LabVIEW compiler evolves over time due to new VI elements as well as better code optimization. Machine code generated by LabVIEW 3 will for sure look different from the same diagram compiled in LabVIEW 8.5.

    Rolf Kalbermatter

  12. QUOTE(Neville D @ Nov 9 2007, 01:24 PM)

    I have a Windows XP PC with dual Gigabit Ethernet network cards in it. I would like to use one of them for reading TCP data from a remote (non-windows) computer, and one of them to write TCP data to the same remote machine.

    Any pointers or caveats? Would there be a substantial performance gain in separating out the read and write tasks to different network cards?

    Any other pointers on speeding up TCP reads? It currently takes about 50-100ms to read about 500k of data over Gig-E.

    Windows can't do that. And to be honest, I don't think any hardware except maybe some very special dedicated high-speed routers would support that. The IP routing for such a system would get way too complicated, with packets ending up being echoed over and over again.

    Also, you would have to have two network cards on both ends anyhow, and in that case what keeps you from making them part of two different subnets and maintaining two separate connections, one on each subnet?
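
    If you do go that route, the socket-level trick (shown here as a rough C/Winsock sketch; the LabVIEW TCP Open Connection doesn't expose this directly as far as I know, and the names and addresses are made up) is to bind the client socket to the local address of the card you want to use before connecting:

        #include <winsock2.h>

        /* Sketch: force an outgoing TCP connection over a specific NIC by
           binding to that card's local address before connect().
           Error handling is omitted for brevity. */
        SOCKET connect_via(const char *localIp, const char *remoteIp, unsigned short port)
        {
            SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in local = {0}, remote = {0};

            local.sin_family = AF_INET;
            local.sin_addr.s_addr = inet_addr(localIp);   /* e.g. the NIC on subnet A */
            bind(s, (struct sockaddr *)&local, sizeof(local));

            remote.sin_family = AF_INET;
            remote.sin_addr.s_addr = inet_addr(remoteIp);
            remote.sin_port = htons(port);
            connect(s, (struct sockaddr *)&remote, sizeof(remote));
            return s;
        }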

    I also think that you expect a bit too much of Gig-E. The theoretical bandwidth of roughly 100 MB per second is definitely a lot lower in practice for an end-to-end connection, both due to TCP/IP overhead as well as limited throughput in the bus interface and, even more so, the TCP/IP stacks. They haven't really been designed for such high speeds and usually can't support them anywhere close to the theoretical limit. Also, the way data is handled in the LabVIEW TCP/IP nodes makes them slower than what you can get with more direct access to the socket library, but that access is also a lot tougher to manage than it is in LabVIEW.

    Rolf Kalbermatter

  13. QUOTE(Tomi Maila @ Nov 6 2007, 02:52 PM)

    All memory management functions are available for shared libraries. CINs are not needed and I don't recommend using CINs. You can even call memory management functions from LabVIEW directly as if they were exported functions in a DLL by using LabVIEW.exe as the library in a library node for the development environment and the corresponding runtime DLL (LVRT.dll or something) for the runtime environment.

    Tomi

    Better yet! Just use LabVIEW alone as the library name; watch out, this one is case sensitive! Then you won't have to worry about lvrt.dll or labview.exe depending on whether you build an application or not.
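
    As a rough sketch of what those manager calls look like on the C side (using the declarations from extcode.h in LabVIEW's cintools directory; FillLVString is just an illustrative name), the classic pattern for filling a LabVIEW string handle is:

        #include "extcode.h"   /* from LabVIEW's cintools directory */
        #include <string.h>

        /* Resize a LabVIEW string handle and copy C text into it, using the
           memory manager functions exported by LabVIEW itself. */
        MgErr FillLVString(LStrHandle strH, const char *text)
        {
            int32 len = (int32)strlen(text);
            MgErr err = NumericArrayResize(uB, 1, (UHandle *)&strH, len);
            if (err != mgNoErr)
                return err;
            MoveBlock((UPtr)text, (UPtr)LStrBuf(*strH), len);  /* copy the characters */
            LStrLen(*strH) = len;                              /* set the string length */
            return mgNoErr;
        }

    The very same functions (DSNewPtr, MoveBlock, NumericArrayResize and friends) can also be bound directly in a Call Library Node by entering LabVIEW, case sensitive and without a path, as the library name; they then resolve in whatever is hosting the VI, labview.exe in the IDE or lvrt.dll in a built application.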

    Rolf Kalbermatter

  14. QUOTE(crelf @ Oct 27 2007, 03:18 PM)

    Well, I suppose that will work, but it means that you need to keep track of what's valid and invalid for each subVI that the checking algorithm is in, so keep it traceable.

    A standardized value works too, and for floating point NaN is probably the best approach; at least I do it that way, if I do it at all. Of course it could fail if there is an actual calculation that is invalid and its result is wired to that input. But!! even then, having tested for NaN (and probably taking some default action) is still most probably the best course of action.

    In terms of subVIs, I have not yet come across situations where I really wanted to know if a value was unwired, but quite often whether it was valid at all, wired or not!

    Rolf Kalbermatter

  15. QUOTE(sprezzaturon @ Nov 1 2007, 09:22 AM)

    ...which would be disappointing if LV included coffee machine routines but no way to access a coffee machine except by spending much more money.

    The standard developer edition does not include IMAQ Vision. It's an add-on to the developer edition too.

    And then comes the question whether there isn't some sort of discrepancy in needing LabVIEW Developer Edition + IMAQ Vision just to get webcam access. I would think so.

    That said, if you have IMAQ Vision, IMAQ for USB (which is a free and unsupported download) certainly would work. If you don't have IMAQ Vision, I'm not sure where the analogy about coffee machine routines that LabVIEW might contain would apply, because your LabVIEW obviously doesn't contain real image manipulation either. Of course you can always use generic numeric analysis functions to work on images, but by that definition every compiler out there should come with webcam support.

    QUOTE

    I'm not expecting the world here. It's like buying the best DVD player in the world only to get it home and find out that it's buttonless and the remote is going to cost as much as the DVD player. Just a little vent of frustration...

    With today's DVD player street prices, that is not even unlikely to happen :rolleyes: . Maybe that says something about what might be wrong in this world, but that is more of a general problem and doesn't have much to do with LabVIEW. :wacko:

    Rolf Kalbermatter

  16. QUOTE(daro @ Oct 27 2007, 09:10 AM)

    Thanks for your response.

    I first installed LV, then Vision; anyway, I've deleted all the Vision stuff and reinstalled it.

    Yes, Vision is activated.

    I'll check the compatibility, thank you.

    I really doubt that Vision Assistant 7.1 can control LabVIEW 8.2 to create VIs in it. The other way around would work, I think, but not for an arbitrary number of versions backwards.

    Rolf Kalbermatter

  17. QUOTE(sprezzaturon @ Oct 22 2007, 07:01 AM)

    Hello Siva,

    I'm getting started with LabVIEW 8.5 Developer's edition and was disappointed that we spent all that money without getting any functions to access webcams.

    And I still can't control my coffee machine with LabVIEW!!

    Honestly, you can always expect anything and everything, but I don't think that by itself makes it a legitimate expectation. Most LabVIEW developers couldn't care less about webcam access.

    If all you want to do is control a live webcam, then the LabVIEW Developer Edition is certainly not the right tool.

    Rolf Kalbermatter

  18. QUOTE(PJM_labview @ Oct 24 2007, 11:07 AM)

    Well, this has to be possible with the existing XNode stuff, since I am pretty sure the timed loop (or timed sequence) is an XNode.

    Figuring out how to do what you describe though, is probably not going to be easy.

    PJM

    Which VI/XNode file would you suggest looking at as a starting point for that?

    Rolf Kalbermatter

  19. QUOTE(tharrenos @ Oct 24 2007, 08:49 AM)

    Well, I could adjust the Lantronix according to the LabVIEW conversion as soon as I find out how it works. I am still far from understanding how LabVIEW converts data to TCP/IP. I tried to follow some examples but I couldn't understand the main idea. Is there a book or any online docs I could read?

    Thank you

    There is no conversion, really. The TCP nodes interpret the data that is passed to them simply as a stream of bytes, which is in fact what a string is too. So if you want to send binary data, you simply create an array of U8 bytes laid out the way you want your byte stream to look and convert it to a string with the Byte Array To String node.

    If it is text your device expects, you format that text into the string.
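
    As an illustration of what such a byte stream might look like on the microcontroller side, here is a sketch assuming a purely hypothetical protocol of one command byte followed by a 16-bit step count, most significant byte first:

        #include <stddef.h>
        #include <stdint.h>

        enum { CMD_MOVE = 0x01 };   /* hypothetical command code */

        /* Build a 3-byte frame: command byte plus a big-endian 16-bit step count.
           On the LabVIEW side the same three bytes would be built in a U8 array,
           run through Byte Array To String, and written with TCP Write. */
        static size_t build_move_frame(uint8_t buf[3], uint16_t steps)
        {
            buf[0] = CMD_MOVE;
            buf[1] = (uint8_t)(steps >> 8);     /* high byte */
            buf[2] = (uint8_t)(steps & 0xFFu);  /* low byte  */
            return 3;
        }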

    Rolf Kalbermatter

  20. QUOTE(Tomi Maila @ Oct 22 2007, 02:43 PM)

    I have a vision of new, certainly interesting LabVIEW functionality that I'd like to write an implementation of. The best implementation of this new functionality would require a smartly functioning custom structure, similar to the timed sequence in the sense that there need to be multiple consecutive diagram parts in the new structure, with XNode-style inputs and outputs in each of them. However, unlike the timed sequence, the functionality of this new diagram would not resemble a sequence at all.

    Now the question is that I need all the possible ideas you guys may have on how this could be implemented. I'm somewhat familiar with simple XNodes that can act as smart subVIs with scriptable content, terminals and visual appearance. However, I have no idea how I could implement a structure that could contain multiple segments of code inside it, with scriptable inputs and outputs for each segment. In addition, I'd need to be able to control if/when the segments are actually executed.

    Sounds like a challenge. That makes it an interesting task... ;)

    Cheers,

    Tomi

    Apart from the already mentioned templates, I don't think there is any way to have an XNode work as a structure. They can be resizable nodes, but that is about it.

    And if I were you, I would try to patent it before you publish it. Otherwise NI will do it :ninja:

    Rolf Kalbermatter

  21. QUOTE(tharrenos @ Oct 23 2007, 05:22 PM)

    I am almost done with the hardware implementation of the project, but I still have queries on the LabVIEW side. The hardware consists of a Lantronix MatchPort, an Atmel Mega64 MCU, a stepper motor controller and a stepper motor. I'll define a serial protocol within the microcontroller; let's say, for example, that the value x will make the motor perform a revolution. My question is how the TCP/IP conversion works. Is it something de facto? The Lantronix offers an Ethernet-to-serial conversion supporting the 802.11 b/g standards. Do I need anything else besides my laptop with its built-in WiFi card? All I could find on the internet was based on microcontrollers that run LabVIEW exe clients.

    Well, how the Lantronix device converts TCP/IP into RS-232 is defined by them. So read the documentation for the Lantronix device, bug their technical support, and ask for some examples. And no, don't expect them to have LabVIEW examples. Much more than C examples is probably not possible, maybe a VB example too, but it should give you some ideas.

    Rolf Kalbermatter
