Everything posted by Rolf Kalbermatter

  1. QUOTE(hviewlabs @ Nov 21 2007, 12:19 PM)
     That node has an input that accepts just about any LabVIEW type, including a cluster. For non-cluster types the pack type has no meaning, but for cluster types the node will align the individual elements to that packing alignment. This is also where the node gets a bit complicated, I'm sure. The extraction of linear arrays or scalars is really a no-brainer. The fact that an XNode apparently can have an input parameter that can truly adapt to just about any LabVIEW datatype would be the most interesting aspect of that node, but according to NI there is no such thing as an XNode and therefore no self-adapting datatype input.
     QUOTE(Godpheray @ Nov 26 2007, 09:48 PM)
     GetValueByPointer takes the C-style pointer and the corresponding data type, which are generated by the Import Shared Library Tool, as inputs and copies the value the pointer points to in the shared library (dll/so/framework) to LabVIEW.
     Input terminals:
     Input Type: Input Type is the LabVIEW data type to which you want the data converted. It can be Numeric, String, Array or Cluster. This VI returns an error if LabVIEW cannot convert the data wired to Pointer to the data type you wire to this input. If the data is an integer, you can coerce it to another numeric representation, such as an extended-precision floating-point number.
     Pointer: Pointer is a memory address represented by a 32-bit unsigned integer in LabVIEW.
     Pack Type: Byte alignment information of the Input Type.
     Output terminals:
     Value: Value is the data copied from the memory pointed to by Pointer, converted to the data type specified by Input Type.
     One interesting aspect of that node seems to be the fact that one can convert a pointer into a string or array. I can see that converting into a string should be possible by simply looking for the NUL termination character, but turning it into an array would suggest to me that the input pointer cannot just be any possible pointer but must be one prepared by DSNewPtr.vi. Otherwise that node has no way of knowing how big the memory area the pointer is pointing to could possibly be. There is not even a documented DSPtrSize function, so I guess DSNewPtr.vi must do some internal magic to the pointer so that GetValueByPointer.xnode can make sure to never overrun the end of the allocated memory area. A sketch of how such size bookkeeping could work is shown below. Rolf Kalbermatter
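     To illustrate that last point: a bare C pointer carries no length information. A NUL-terminated string can be sized by scanning for its terminator, but a raw array cannot, so the size has to be recorded somewhere at allocation time. Here is a minimal C sketch of one way an allocator can hide such bookkeeping in front of the returned block; whether DSNewPtr actually does anything like this is, as said above, just a guess, and all names here are made up for the illustration:

        #include <stdlib.h>
        #include <string.h>

        /* A NUL-terminated string carries its own end marker... */
        size_t string_size(const char *p)
        {
            return strlen(p);               /* scan until the '\0' */
        }

        /* ...a raw array does not, so store the size in a hidden header. */
        typedef struct { size_t size; } SizedHeader;

        void *sized_alloc(size_t size)
        {
            SizedHeader *h = malloc(sizeof *h + size);
            if (h == NULL) return NULL;
            h->size = size;
            return h + 1;                   /* hand out the pointer past the header */
        }

        size_t sized_size(const void *p)    /* recover the size stored at allocation */
        {
            return ((const SizedHeader *)p - 1)->size;
        }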
  2. QUOTE(CraigGraham @ Nov 26 2007, 08:58 AM)
     There are two ways of doing this. One is with a simple FTP client, uploading the file to the cFP and replacing the original one. I haven't used that yet. Another one is to use the RT System Replication VI library (http://zone.ni.com/devzone/cda/tut/p/id/3937) available on NI's site. With that you can build a LabVIEW app to install, control, and upload new RT exe files on the target. In one case recently I simply built that library into the system configuration for the host application. The user simply puts a new exe in a special directory on the host, goes in there, presses "update remote system", et voila. Rolf Kalbermatter
  3. QUOTE(Mike C @ Nov 27 2007, 09:22 AM)
     If you want to use the Internet Toolkit, you certainly can go into the VI library and modify it in such a way as to provide some information about the current size in a global variable. The Internet Toolkit FTP VIs, if you dig into them, basically read multiple blocks inside a loop, writing them to the desired file, until all data has been retrieved or the connection is aborted. There is no current progress information because the high-level VI to download a file is written in such a way as to only return on a finished download or on error. So there would never be a meaningful progress return value from that VI itself. The information you want is deeper inside that VI hierarchy, and you will have to modify those VIs somewhat to give you that information in a meaningful way. Rolf Kalbermatter
  4. QUOTE(Jim Kring @ Nov 27 2007, 01:09 AM)
     According to the C++ example on MSDN it would seem that EncoderParameter.value should be an Int64. So I simply tried to select the EncoderParameter(Encoder encoder, Int64 value) method instead of EncoderParameter(Encoder encoder, Int16 value), and everything seems to be alright; at least I do not get any error anymore. Of course the Convert to Int16 could go away or be explicitly converted to Int64, but LabVIEW will do that implicitly too. Rolf Kalbermatter
  5. QUOTE(tumanovalex @ Nov 23 2007, 01:58 PM)
     Well, you obviously need to read a C textbook about data pointers. The way you try to return the data pointer simply can't work. Allocating the buffer inside the function and assigning it to the array pointer does absolutely nothing in terms of allowing the caller to ever see that buffer. You would need a reference to an array pointer, not just an array pointer alone, but because of memory management it is a very bad idea in general (and especially bad for use in LabVIEW) to allocate a memory buffer inside a function and return it to the caller. This is normally handled in one of two ways (sketched in C below):
     1) Provide a way to retrieve the necessary buffer size first, allocate the buffer in the caller (here LabVIEW) and then retrieve the actual data. You could do this by first passing a NULL pointer as the buffer parameter and in that case only returning the file size. Then calculate the necessary buffer size in LabVIEW, allocate that buffer as an array of the elements you require (double) and pass that as a C array data pointer this time.
     2) Allocate a reasonably sized buffer in LabVIEW first, pass its size together with the buffer itself to the function, and then inside the function check whether the size is big enough. If it is, read the data into the buffer, returning the actual size filled in and a success indication as the function return value. If it is too small, return the actually needed size together with an error indication that the buffer was too small; then in LabVIEW reallocate the buffer with the necessary size and call the function again with the resized buffer.
     These two methods are not only the standard approach for just about any C library but also the only reasonably compatible ones that will allow your code to be called from many different environments, including LabVIEW. Rolf Kalbermatter
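     A minimal C sketch of both patterns, with hypothetical function and parameter names; a real implementation would determine the size from, and fill the buffer with, your file's data instead of the placeholders used here:

        #include <stddef.h>

        #define ERR_BUFFER_TOO_SMALL (-1)

        /* Pattern 1: when 'buffer' is NULL, only report the needed element
           count; the caller (LabVIEW) allocates the array and calls again. */
        int GetData(double *buffer, size_t *count)
        {
            size_t required = 256;          /* placeholder: size read from the file */
            if (buffer == NULL) {
                *count = required;          /* first call: just report the size */
                return 0;
            }
            /* ... second call: fill buffer with 'required' doubles ... */
            *count = required;
            return 0;
        }

        /* Pattern 2: caller passes a buffer plus its size; if the buffer is
           too small, report the needed size so the caller can resize and retry. */
        int GetData2(double *buffer, size_t bufSize, size_t *needed)
        {
            size_t required = 256;          /* placeholder: size read from the file */
            *needed = required;
            if (bufSize < required)
                return ERR_BUFFER_TOO_SMALL;
            /* ... fill buffer with 'required' doubles ... */
            return 0;
        }

     On the LabVIEW side, both variants map onto an Initialize Array followed by a Call Library Node call with the array passed as a C array data pointer.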
  6. QUOTE(hviewlabs @ Nov 20 2007, 09:03 PM)
     Try the Library Import Wizard in 8.5. It can do some of these things for not too complicated structures, but if that does not work, those functions won't help you magically. You will have to understand at least as much about C and pointers in C as you would need to write your own wrapper DLL in C, which deals with this in a cleaner and much easier to maintain manner. Rolf Kalbermatter
  8. QUOTE(hviewlabs @ Nov 20 2007, 08:14 PM)
     Those VIs were not designed (yet) for general use but are used by the Library Import Wizard in 8.5, which should support function parameters to structures containing variable-sized data. All the functions except the XNode are really just wrappers around LabVIEW memory manager functions. What is done through the Library Import Wizard has of course been possible in LabVIEW since about 5.x, and I did use it myself a few times, but I deliberately never went into the details about how this has to be done, for several reasons:
     - It is very complicated, and even when explained step by step I would not expect anyone without very good C knowledge to be able to understand it.
     - Those who can understand it can also do it themselves.
     - The necessary work is tedious, complicated, error-prone and, oh well, throw in whatever else you do not like about programming ;-)
     - For anything but a very simple case of one or two functions with a parameter containing one or two pointers at most, it is a LOT more work than writing a wrapper DLL, and it is a complete pain in the ###### to maintain.
     As to your question about PackType, I could imagine it has something to do with the memory alignment of structure elements. As such it would be a number such as 1, 2, 4, 8, or 16; the sketch below shows the effect. As to passing around references inside LabVIEW: you can only do that when you represent the reference as a pointer (i.e. an int32/int64), not as a native variable-sized LabVIEW datatype. Dataflow simply mandates that memory objects that can be resized get copied at wire branches, unless the LabVIEW compiler can optimize them in such a way that branches that consume the wire only for reading, without reusing it, are executed first. Doing an advanced optimizer that could get around that across structure boundaries might be theoretically possible, although I don't think it is currently done, but a macro optimizer that could deal with this task across (possibly dynamically related) subVI boundaries would most probably be an O(n^n) problem or worse. Rolf Kalbermatter
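     For illustration, this is what such a packing value does to a structure's layout in C; the exact sizes printed depend on the compiler and platform:

        #include <stdio.h>

        #pragma pack(push, 1)               /* pack type 1: no padding at all */
        typedef struct { char c; double d; } Packed1;
        #pragma pack(pop)

        /* default packing: 'd' gets aligned, typically to 8 bytes */
        typedef struct { char c; double d; } PackedDefault;

        int main(void)
        {
            /* prints e.g. "9 16" with Visual C on Windows */
            printf("%u %u\n", (unsigned)sizeof(Packed1),
                              (unsigned)sizeof(PackedDefault));
            return 0;
        }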
  9. QUOTE(Yen @ Nov 13 2007, 12:32 PM)
     Making the library path in the configuration read LabVIEW instead of LabVIEW.exe would help immensely to make this VI run on any LabVIEW platform. And I have a somewhat more useful version of this, I think: CTRL Size Front Panel Object.vi
     QUOTE(Norm Kirchner @ Nov 13 2007, 04:10 PM)
     Actually, NI beat you to it. I have to say that I'm flabbergasted that NI has had this available and never wrapped it up for us, nor have I seen anyone else wrap it up. Is it dangerous to use?
     It has been used deep in some of the tools that LabVIEW comes with for quite some versions now, tools that are really just LabVIEW VIs too. I haven't found it to be dangerous, and it actually has some very nice features, such as resizing list boxes vertically at the pixel level, something you couldn't do in LabVIEW 7.x with other properties (the only way to resize the height of list boxes was through the number of lines, and calculating that based on pixel size changes of the window was a pain in the a__). Some of my applications use this to implement real autosizing of some of the UIs when the user resizes the window. Rolf Kalbermatter
  10. QUOTE(Norm Kirchner @ Nov 6 2007, 05:44 PM)
     Why not add a glyph to the column headers too, and while we are at it, support some standard built-in sorting, or better yet a plugin for a comparison routine that is used for sorting when clicking on a column header? Rolf Kalbermatter
  11. QUOTE(TiT @ Nov 8 2007, 03:15 AM) Hmm, does that pertain to your own perceived coding style somehow? I'm sure you are doing much better than that! Rolf Kalbermatter
  12. QUOTE(Yen @ Oct 21 2007, 04:01 PM)
     Actually the OpenG Builder and also the OpenG Commander (and quite likely VIPM, as it still uses for quite some part the same VI libraries) do have VIs supporting symbolic paths. I created them back in the early days of OpenG Builder, together with the lvzip library, to support a flexible package installation system that the ogp files are still mostly based upon. For the Commander they reside in OGPM PATH Convert Keyword.vi, OGPM PATH Keyword Expand.vi and OGPM Keyword Manager.vi.
     QUOTE(Aristos Queue @ Oct 21 2007, 08:31 AM)
     I won't get involved with the discussion about more symbolic paths to LV built-in stuff. Technical pros, technical cons, politics, blah, blah, blah... But as far as user-defined paths are concerned, there are a number of technical blocks to having arbitrary definitions for symbolic paths as they're used in the internals of LV. But I could see someone creating a set of VIs that managed a path translation scheme for loading VIs dynamically from user-defined symbolic paths. Might be useful for one of you to put together.
     The only problem with that is that it would not pertain to the paths stored in VIs themselves, nor to loading them, as that requires access to LabVIEW internals that cannot be dealt with through "App:Read/Write Linker Info from File", afaik. Rolf Kalbermatter
  13. QUOTE(Jim Kring @ Nov 5 2007, 04:37 PM)
     That was my first guess when starting to read this thread, and I'm pretty sure you hit the nail on the head. The Dependencies scan in the project is almost certainly using "Linker:Read Info From File", but probably only when loading a VI. This function is very fast in returning all relevant dependencies, including CINs, external code libraries and such, so it makes sense to use it instead of trying to do something in LabVIEW itself, which would be WAAAAAY slower. I'm not sure it makes sense to show the CIN as a dependency, as it is really already embedded in the relevant VI, and as such there is no physical file on disk that could be mapped to this item. So the filtering of the items should probably have taken out CINs, or otherwise it should at least use a different icon. Rolf Kalbermatter
  14. QUOTE(silmaril @ Oct 23 2007, 03:53 AM)
     I can't support the original complaint either; I print front panels quite often, both to real printers and to redirected PDF file printer drivers. But yes, making sure that the front panel is drawn before you invoke the print operation is very important ;-). There are actually many ways to cause this, such as using an explicit Print Panel To Printer.vi I got from the NI site, as well as print on completion of a VI, though I think I haven't used the latter in quite some time. Also, making sure the front panel is visible during printing might make a difference too, depending on the display driver. In that case, decreasing the display speed optimization can often make it work even with a hidden front panel. And if these things don't help (quite unlikely, and probably the reason why this hasn't been addressed yet as the OP thinks: bugs that are hard to reproduce, or that are really more an operator error, are something most developers react to with a "Not a bug" label), changing the printing method could make a difference too. Bitmap, PostScript and greyscale, as well as changing printer drivers, could actually give more information as to what could be wrong. Not all printer drivers work well in all print modes, and that is quite often more a bug in the printer driver than in LabVIEW. Rolf Kalbermatter
  15. QUOTE(yogi reddy @ Nov 19 2007, 11:22 AM)
     Then your sender side is wrong somehow. From what I can see, you are trying to send a 2D array, of a size I'm way too lazy to work out by parsing the Matlab script, every 10 ms to each connected client. If this array is anything bigger than a few elements by a few elements, you can happily keep throwing data at the winsock library until it exhausts your memory sooner or later. Why you try to send the same data over and over again at a 10 ms interval to each client is really beyond me. Just send it once when a new connection arrives and then close the connection for now. Also, your error handling is sub-par by far. Not adding the connection refnum back into your array after an error is a good idea, but that doesn't mean the refnum hadn't been valid before; closing it should at least be attempted. Otherwise you leak memory with every refnum that gets thrown out of the bath due to some communication error on that connection. Rolf Kalbermatter
  16. QUOTE(Cool-LV @ Nov 20 2007, 01:47 AM)
     The post before yours explains exactly that. ipconfig can do it too: ipconfig /release [adapter name] will basically disable the network interface, and ipconfig /renew [adapter name] will reconnect it, as in the sketch below. Of course there is certainly a way to do this through the Windows API, but the network enumeration API uses data structures that you most probably do not want to deal with in the Call Library Node, believe me. Rolf Kalbermatter
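     A minimal C sketch of what a small wrapper, or a LabVIEW System Exec call, would effectively execute; the adapter name is just an example:

        #include <stdlib.h>

        int main(void)
        {
            /* disable, then re-enable, the named connection */
            system("ipconfig /release \"Local Area Connection\"");
            system("ipconfig /renew \"Local Area Connection\"");
            return 0;
        }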
  17. QUOTE(liber @ Oct 26 2007, 07:58 AM)
     I'm not too familiar with lvserial and don't know its internal details. Also, I'm not sure what Visual C version and options Martin used to create the DLL. It is very much possible that the generated code cannot cope well with multi-core (and maybe hyperthreading) CPUs. If that is the case, recompiling the DLL with more conservative options and/or a newer Visual C compiler could help. Another possibility is the use of globals or some other non-reentrant constructs in the C code without using the UI threading setting in the Call Library Node. But as Michael said, that library is nice, but it comes at the cost of not having an entire development team ready to deal with any issues that may come up due to new OSes, hardware or whatever the powers in the world can come up with. Rolf Kalbermatter
  18. QUOTE(akala @ Nov 14 2007, 12:19 PM)
     No way with any recent OS! The ability to start an arbitrary executable remotely through an HTTP connection is exactly what all adware, malware and other suspect programmers would like to have. A system that allows it will be infested with those kinds of things within a very short time of browsing the web. So any user allowing that in his browser settings is out of his mind. Rolf Kalbermatter
  19. QUOTE(paololt @ Nov 17 2007, 05:25 AM)
     First, your use of the word ping could be a little misleading here. It usually means a specific network procedure that passes small packets using the ICMP protocol. ICMP is one of the lower-level protocols directly above the IP packet layer, and there are no VIs to access it directly in LabVIEW. From what I see, you are using UDP instead to do some bit error calculations. I'm not sure what you are trying to achieve, but if it is about classifying hardware failures, for instance in the network cards or network infrastructure, your approach is greatly flawed. UDP, while connectionless and not guaranteeing data delivery, is also based on the IP protocol and as such has already gone through IP checksumming and the like. So you won't really get much information about the hardware failure rate, but at best some indication of the ability of your network topology to cope with the amount of data you throw at it. For small buffers, and not being on a congested corporate network, this should be very close to 100%. Rolf Kalbermatter
  20. QUOTE(Justin Goeres @ Nov 16 2007, 05:03 PM)
     Before LabVIEW 8.5, LabVIEW could only save back one version. 8.5 is the first that can save two steps back (8.2 and 8.0), and according to a presentation I attended at this year's LabVIEW User Group they intend to maintain that, or do even better, from now on, since they seem to have refactored the versioning of VIs in 8.x in such a way that saving back to older versions got a lot simpler. That is probably also the reason why they removed support for loading 4.x and 5.x VIs in 8.5. Rolf Kalbermatter
  21. QUOTE(Götz Becker @ Nov 13 2007, 08:29 AM)
     Well, obviously the ports were not closed properly, since your application got killed. Windows does have some resource garbage collection that will usually close all resources that were created on behalf of an application when that application leaves the building. For some reason, the manner in which the LabVIEW application was terminated somehow prevented Windows from properly garbage collecting the winsock handles. There is probably not much you can do about that except hope you won't have to kill LabVIEW like that again. Rolf Kalbermatter
  22. QUOTE(Cool-LV @ Nov 14 2007, 09:13 PM)
     Well, it's not getting much clearer, but if the password is the only problem you are having: a password is only required for remote shares that have one defined. If it is a share that is open for anyone to read, you don't need a password to connect to it. For a share that is password protected, there is no way to connect without the password; a sketch of the underlying Windows API call follows below. Rolf Kalbermatter
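     As an illustration, a wrapper DLL (or a Call Library Node with some patience) could use the Windows WNetAddConnection2 API to establish such a connection; the server, share, user and password below are placeholders:

        #include <windows.h>
        #include <winnetwk.h>               /* link against mpr.lib */

        int connect_share(void)
        {
            NETRESOURCEA nr = {0};
            nr.dwType       = RESOURCETYPE_DISK;
            nr.lpRemoteName = "\\\\server\\share";   /* placeholder UNC path */

            /* placeholder credentials; NULL would use the current user */
            DWORD err = WNetAddConnection2A(&nr, "password", "user", 0);
            return (err == NO_ERROR) ? 0 : (int)err;
        }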
  23. QUOTE(tcplomp @ Nov 15 2007, 08:34 AM)
     Well, not really, and they have much better things to do with their time. But in theory a few guys there actually have an idea how their compiler aligns machine code and where it puts it inside the binary VI structure, so it would be possible to extract it, point a decent disassembler at it, and guess the original code from that. But disassembling code is very hard and even more time consuming, and in the end you normally end up with a code construct that only sort of resembles the functionality of the original code. A machine code to LabVIEW code disassembler is definitely not available, however. The best you could get is machine code to pseudo C code, I would guess. Anything else would simply be way too much work, especially since the actual LabVIEW compiler evolves over time due to new VI elements as well as better code optimization. Machine code generated by LabVIEW 3 will for sure look different from the same diagram compiled in LabVIEW 8.5. Rolf Kalbermatter
  24. QUOTE(Neville D @ Nov 9 2007, 01:24 PM)
     Windows can't do that. And to be honest, I don't think any hardware except maybe some very special dedicated high-speed routers would support it. The IP routing for such a system would get way too complicated, with packets ending up being echoed over and over again. Also, you would need two network cards on both ends anyhow, and in that case what prevents you from making them part of two different subnets each and maintaining two different connections, one on each subnet? I also think you expect a bit too much of Gig-E. The theoretical bandwidth of about 100 MB per second is definitely a lot lower for an end-to-end connection, both due to TCP/IP overhead and due to limited throughput in the bus interface and, even more so, the TCP/IP stacks. They haven't really been designed for such high speeds and usually can't get anywhere near the theoretical limit. Also, the way data is handled in the LabVIEW TCP/IP nodes makes them slower than what you can get with more direct access to the socket library, but that access is also a lot tougher to manage than it is in LabVIEW. Rolf Kalbermatter
  25. QUOTE(kresh @ Nov 5 2007, 08:17 AM) Duplicate post: http://forums.ni.com/ni/board/message?boar...d=19997#M282599 Rolf Kalbermatter