Rolf Kalbermatter

Members
  • Posts: 3,786
  • Joined
  • Last visited
  • Days Won: 245

Everything posted by Rolf Kalbermatter

  1. QUOTE(crelf @ May 4 2007, 09:46 AM) I've used Inno Setup a few times and was quite satisfied with it. For simple installations it is really easy to create a setup script, and if you want to go further you can add real code based on a Pascal-like syntax, but that is really only necessary for your own specific dialog templates or for custom installation steps involving Windows API calls. And it is free too, and often used by Open Source projects for their Windows installers. Rolf Kalbermatter
  2. QUOTE(BrokenArrow @ May 7 2007, 10:53 AM) Ohhh my god! My current projects are easily around 800 VIs, contain probably fewer than 20 globals, and even fewer sequences (most of them being single-frame sequences to enforce some dataflow dependency). Your problem is likely some serial port timing. If you have VIs that do some waiting inside and use that same VI for something else, that something else won't be able to run before the first operation has finished. However, I thought the Serial Port Write.vi you mention was set to be reentrant, so that should not be a problem. If it isn't, making a copy under your own name and setting it to be reentrant will save you some time, since you won't need to create individual VIs for each place. A reentrant VI used in multiple places really executes the same way as the same number of individually named copies of that VI would. The drawback of reentrant VIs is that you cannot debug them well before LabVIEW 8, and basically not at all before LabVIEW 7. Rolf Kalbermatter
  3. QUOTE(bbean @ May 13 2007, 08:17 PM) Well, you gave yourself the answer already, in fact. The solution is to upgrade, since it works in LabVIEW 8.2. It is likely a problem in the ActiveX callback event handling inside LabVIEW, where you can do little short of rewriting the ActiveX component, which I assume is not an option here either. NI does not usually release bug fixes for older LabVIEW versions once a new version is released (unless you own >500 licenses and therefore have a certain direct influence on NI's decision making). Rolf Kalbermatter
  4. QUOTE(BrokenArrow @ Apr 29 2007, 08:45 AM) No, certainly not! By using property nodes in abundance to set the value of controls you will likely create a program that feels very sluggish. For a one-time initialization of the front panel this would probably not be too bad, but it is very likely that once you start to use Value properties intensely you use them all over the place, and if you do that in a generic update state of your UI state machine it can already get bad. If you do that in a subVI through a control reference that is called all over the place in your program, you can probably sell a coffee machine with your program too, so that your user can do something while waiting for the UI to update after each mouse click :-) Rolf Kalbermatter
  5. QUOTE(Tomi Maila @ Apr 19 2007, 12:54 PM) You could send the VI to Jim directly (or to myself) and ask him to add it on your behalf. He (or I) will maintain your copyright notice. Or you get an account at sourceforge.net and ask Jim to have you added as a developer to the OpenG Toolkit project. This will give you commit rights to the libraries, but obviously this is not something you and everyone else would want to be granted to just about everybody. Once you start to commit your own VIs to the libraries there are a number of things to watch out for, although adding to existing libraries is really quite trivial. I usually leave the task of generating a package to be distributed to Jim, for instance when he feels alright with it. Rolf Kalbermatter
  6. QUOTE(crelf @ Apr 21 2007, 05:08 PM) You can also read the first post in a row of posts to come about LabVIEW external code at http://expressionflow.com/external-code-in-labview-part1-historical-overview/ to get an idea about what CINs and CLNs are about, why they exist, and why you should use CLNs instead of CINs nowadays. Rolf Kalbermatter
  7. QUOTE(Pablo Bleyer @ Apr 18 2007, 03:12 PM) Your CIN likely makes use of some external functions located in a DLL. This could be a C runtime library or some non-standard Windows or third-party DLL. In the first case you can either make sure the runtime library installer for your development environment has been run on each computer you want to run your LabVIEW program on, or configure the CIN project to link with a static C runtime library instead of a dynamic one to prevent problems on platforms where your compiler or its dependent tools haven't been installed (this will make your CIN code resource considerably larger). In the second case you will have to find the non-standard Windows or third-party DLL you are calling, find an MS or other installer that will install it, and make sure you tell any user to install that package too. A CIN LSB is in fact a DLL too, and a DLL that references other DLLs that cannot be found by Windows will be refused by Windows, leaving LabVIEW with no LSB to link into the VI. Rolf Kalbermatter
  8. QUOTE(Pablo Bleyer @ Apr 16 2007, 01:56 AM) The problem is not that the variant does not know what datatype it contains, but the fact that LabVIEW is very strictly typed. And that is why there is a difference between a strictly typedefed control reference and a non-strict one. These are decisions that are made at compile time, and if the datatype of something can possibly change at runtime somehow, then the only solution in LabVIEW to still be able to access that datatype at runtime is variants. If you make the refnum strict, that is not necessary, and if the refnum is non-strict, it is necessary; otherwise LabVIEW could in no way allow you to access the Value property at all. To change that would require changing the entire LabVIEW system to be weakly typed, but weak typing has other disadvantages, one of them being that a datatype is usually always some kind of variant type and the necessary (and usually quite time-consuming) conversions are always done at runtime. Strict typing has the advantage that most type conversions can be done at compile time, resulting in significantly improved performance at runtime. Other disadvantages of weak typing are the possible loss of information when converting multiple times between different types, or the fact that certain type conversions are simply not useful, but you only see that at runtime instead of at compile time. Rolf Kalbermatter
  9. QUOTE(Jim Kring @ Apr 14 2007, 09:43 AM) Hmm, I tried that and it failed to install TortoiseSVN. Also, that installer does nothing to install svnserve as a service in Windows, which is the main point of my post. Rolf Kalbermatter
  10. QUOTE(Gavin Burnell @ Mar 29 2007, 04:37 PM) Actually, some well-known NI person here once stated that he would rather make those castings impossible altogether, and he has done so in the past for some areas, as they are a very hacky way of sending LabVIEW onto a crash course. There seem to be some extra security checks so that when you try to access a typecast property that doesn't exist for the actual object in question you get an error message, but he stated that this security check, while there, is anything but foolproof. Rolf Kalbermatter
  11. QUOTE(dannyt @ Mar 5 2007, 04:00 AM) I personally feel these two issues are non-issues. If you distribute your program as an executable, the source code is gone anyhow. So the additional security of not having the VI names appear as strings in the executable is really a minor thing. You can get the same by name-mangling your VI names when building your executable. Performance in all of my applications is limited not by the subVI call overhead (most projects I develop or maintain currently have between 600 and 1000 VIs) but by the limited user reaction :-) or the limited reaction time of the external hardware systems I need to control. Also, when I do calculations I make sure to use inplaceness and preallocated arrays as much as possible, together with shift registers and auto-indexing. All these things allowed me to get performance for almost any algorithm that was close to what you could get with normal C code without dirty optimization tricks, and often faster than many C routines I found on the net to do the same (which obviously could be optimized by cleaning them up and using more proper functions or implementations). QUOTE Finally if instead of releasing an executable you release the actual project, I suspect this would not be the way to go. Obviously not. Because with the project you distribute the original source code, so at least the security concern would not play a role at all. Rolf Kalbermatter
  12. QUOTE(sam @ Apr 13 2007, 11:42 PM) LabVIEW 32-bit will run. If and when there will be a 64-bit LabVIEW is top secret. It seems impossible to get anyone from NI to comment on that in any way, and I doubt you want to resort to illegal methods to get some NI person at gunpoint to tell you about it. Hardware support for Win64 is basically non-existent for the moment. Rolf Kalbermatter
  13. QUOTE(Jim Kring @ Apr 12 2007, 11:07 AM) This is a great way of using the local file protocol that Subversion supports. However, as of Subversion 1.4 and higher you also have the option to set up an SVN server as a service under Windows, which makes it start up and run automatically without a user even needing to be logged in. It does require a small amount of command line typing but is not really difficult at all. You just need to get the binary package of Subversion for Windows and do the attached steps. Rolf Kalbermatter
  14. QUOTE(alukindo @ Apr 13 2007, 11:28 PM) You can disable DST in your OS. Other than that, LabVIEW won't do anything different than what it does now, since it uses OS functions to deal with date/time. Before LabVIEW 7.0 it always used the current DST status to convert date/time. From LabVIEW 7.0 on it uses the DST status that belongs to the actual timestamp, for all timestamps on and after Jan. 1, 1970. There is however one limit to this that was uncovered with the recent DST change in the US. Windows can only have one DST period per timezone, and that applies to all years. So there will now be a two-week period where the DST calculation is off for all years before this one in the US. Microsoft stated that they will fix that in Vista but have no intention of fixing it for earlier Windows versions. In LabVIEW 8 and above, some of the Date/Time functions have an extra boolean input that allows you to treat the date/time as UTC. Rolf Kalbermatter
  15. QUOTE(gleichman @ Apr 12 2007, 02:47 PM) This is absolutely right. There is no problem per se with calling VIs anywhere on disk, even in an executable. There are however some restrictions as to what such VIs need to conform to. 1) For the runtime system the VIs should have been compiled in the same LabVIEW version (bugfix version number differences are normally ok) and for the same platform. 2) You need to construct the path to those VIs exactly in order to be able to call/load them. 3) If they make use of subVIs, those subVIs MAY NOT clash in any way with subVIs that are used in the main executable already. Either you make sure they use a completely different hierarchy of subVIs (name prefixing for instance, or LabVIEW namespaces in >= 8.0), or you make sure those subVIs are all exactly the same for the plugins as for the main executable. Number 2 is the first stumbling block most people fall over. And once they figure that out they immediately run into 3, which can only be solved with strong discipline while developing. I usually do that by having a top-level VI which includes all top-level VIs of my project, including the plugins, and make sure to recompile the entire project before making a distribution of the executable and/or plugin component. Rolf Kalbermatter
  16. QUOTE(AdamRofer @ Apr 12 2007, 12:14 PM) Well, it's a great way. And using the VIs from the large_file OpenG library would also be possible. Basically, by wrapping the handle returned from CreateFile into a LabVIEW bytestream file refnum you could use the standard WriteFile primitive and CloseFile primitive. Of course this has a subtle chance of breaking in future LabVIEW versions, but it uses only documented External Code Reference functions and should therefore remain working for quite some time. My prior approach does have one advantage: it can print text to any Windows printer that can represent a bitmap device (that would exclude plotters for instance, but I'm not sure Win32 still supports them at all. Win3.1 GDI did, however.) It is too bad that FMOpen() seems to test that the first path element is only one character, otherwise no DLL call would probably be necessary at all under Windows. Rolf Kalbermatter
  17. QUOTE(AdamRofer @ Apr 12 2007, 12:51 PM) It's not very likely there is any method to do that. Rolf Kalbermatter
  18. QUOTE(Ed Dickens @ Apr 12 2007, 11:18 AM) Well, I checked, and BOOLEAN is an alternative Windows SDK definition that equals BYTE, which is basically an 8-bit value. Some Windows APIs use that for boolean parameters and return values, although not the standard Win32 API, which normally uses BOOL (a 32-bit integer boolean). Rolf Kalbermatter
  19. Another way is this. It is however Windows only and uses the Windows GDI to print the text to any installed Windows printer. Strictly taken, it is not sending the text to the printer at all, but instead renders the text through Windows GDI and sends the resulting image to the printer. It's a first attempt from some application I did recently where installing the Report Toolkit was not an option. So I'm sure there are a few rough edges and places that could still be optimized. Rolf Kalbermatter
  20. QUOTE(MartinD @ Apr 4 2007, 06:56 PM) This is a standard problem when dealing with paletted bitmaps. The easiest solution is to translate the paletted bitmap into a non-paletted one (24 bits for instance), do the edits you want to do, and then, if you want to go back to a paletted one, use a color reduction algorithm. Doing all this in LabVIEW is quite a lot of work and number crunching, and I would recommend using applications that were actually created to do just this sort of thing. Adobe Photoshop is one of them, Paint Shop Pro can certainly do it too, and if you want to go free, Gimp is an incredible package. But expect to learn a few things and get acquainted with each of those applications in order to do anything meaningful with them. Trying to do it in LabVIEW, however, is certainly an even much more labour-intensive task. Rolf Kalbermatter
  21. QUOTE(alukindo @ Apr 9 2007, 09:34 PM) I think you confuse some things here. A VI never needs to be set to reentrant in order to be able to call a CORRECTLY set up Call Library Node. If you need to do that, you have a problem with the configuration of your Call Library Node not matching what the actual function needs. Calling ActiveX components in non-single-threaded mode is an entirely different issue. An ActiveX component, when installed, is registered with the so-called threading model it can work with. That is one of the reasons you should install an ActiveX component rather than trying to copy it to another system. Problems arise when this threading model does not match what the ActiveX component is actually able to handle. If it says it is multithreading safe, LabVIEW will just assume that this is the case and not take any precautions to protect the ActiveX component in any way. This means the ActiveX component can be called from different threads in LabVIEW, and if the component is not carefully written to handle this properly it will quite soon crash. Putting the VIs into the UI thread will basically force LabVIEW to call the ActiveX component always from this single thread and solve the issue, but the problem is really that the registration of the component is bad. This might not be so apparent in other programming environments, since it is quite a bit of work to actually make an application multithreaded in them at all, so it seldom happens, and even if they use multiple threads the programmer tends to access a specific component usually always from the same thread. LabVIEW however simply has multiple threads and also uses them, unless it is explicitly told not to do that for certain things. Rolf Kalbermatter QUOTE(Ed Dickens @ Apr 9 2007, 03:02 PM) I'm having problems getting a particular function in a DLL to work. I'll try and provide all the needed documents and files, hoping someone can point me in the right direction.
Included in the attachment is the .dll, the VI I'm working on for this function, the documentation for the function, the header file for the dll and a sample C source file that uses this function. The problem I'm having is that the function appears to return the data, but when it attempts to write the data to the indicator, LabVIEW crashes. The function is supposed to be run twice. On the first run you're supposed to have the "ProvidedBufferSze" parameter set to 0 and the "Buffer" = Null. After the run, "NeededBufferSize" will contain the needed buffer size that can be used for "ProvidedBufferSze" on the next call. The size of the structure containing the data is supposed to be 100 bytes according to the documentation. On the first run of the function, "ProvidedBufferSze" returns 194 with a two channel device in the PC. It seems to me this should be 200. On the second run with either 194 or 200 on "ProvidedBufferSze" (running with execution highlighting on), I can see the "Buffer" return a 2 element array. When this array is written to the indicator (or a probe), LabVIEW crashes and I get the Windows "Send Error Report" dialog. Mainly I just want to verify that I have the CLN set up correctly and that my logic in the VI seems correct. After this, I'll probably end up calling the manufacturer to see what they have to say. Thanks Ed Well, you can't embed a string in a cluster and just pass that to the DLL. A LabVIEW string is something very different from what a C DLL would expect. Also, the prototype clearly shows the string as a fixed-size entity, which means it is inlined in the structure and not even a C string pointer (which actually makes it easier for us to call it from LabVIEW, if done right). So what you want to do is pass a flat buffer of bytes to the function with the needed size. Something like what is shown in the attachment. One afterthought: the size numbers you mention indicate that the structure only uses 97 bytes.
From what I can see in the header file, the only thing I'm not sure about is the BOOLEAN datatype. The usual Windows type is BOOL, which is a 32-bit integer, but BOOLEAN seems to be defined as a BYTE type. So obviously your structure would be only 97 bytes long, and Softing seems not to use padding in their API, which makes the buffer always a multiple of this value. This would mean you will have to adjust the attached VI slightly to get the right data from the buffer. Basically, changing the 20 constant to 17 should already work, although you may have to make sure that the embedded boolean gets interpreted right too. Rolf Kalbermatter
  22. QUOTE(TiT @ Mar 6 2007, 03:16 AM) Haha! 256MB is enough for XP if you run nothing else on it! Running an SQL server and LabVIEW is definitely stretching the limit to an almost unacceptable point. Tell the customer that 512 MB of memory is really not a lot for Windows XP and an SQL server alone, and that LabVIEW also needs a bit of memory to work. Saving on memory nowadays, with hardware prices being so low, is simply stupid (but you'd better not say that verbatim to your customer). Rolf Kalbermatter QUOTE(yen @ Mar 8 2007, 02:22 PM) 4 bytes = one I32 number = one reference. It sure sounds like you're opening a reference and not closing it. To check if this is a driver issue you can use the same driver and create a simple loop which runs many times and does the open-insert-close cycle. If there is a problem, you should see it even there. If not, can you upload the actual VI whose images you posted? A LabVIEW refnum typically takes more than 4 bytes. The refnum itself is a 32-bit value, but it points internally to a structure containing the actual management data associated with that refnum, such as the OS handles and other data LabVIEW needs to manage that refnum. Rolf Kalbermatter
  23. QUOTE(JFM @ Feb 15 2007, 02:24 PM) INI file paths are always normalized when you use the LabVIEW INI Read/Write Path VIs. And yes, the normalized form is basically Unix notation. Since VxWorks most likely has Unix roots, I think the actual path string syntax would be correct. One more reason to deal with paths as much as possible using the LabVIEW built-in Path data type! That solves any problems with platform differences. To assume that cRIO is always only one platform would be too limiting for sure, and as LabVIEW gets expanded to include more and more distributed technologies this will only become more true in the future. Soon you might communicate from within your LabVIEW development environment with a LabVIEW-programmed fridge on some RT-Linux-operated 32-bit embedded chip ;-) integrating it into your web service application that runs on another remote system under Apache or something. Rolf Kalbermatter
  24. QUOTE(Val Brown @ Feb 16 2007, 02:27 AM) The CPU usage is an effect of the asynchronous operation of VISA nodes. It's a bit strange in fact, since asynchronous operation is supposed to just save CPU, and in fact it does, but in a strange way, due to LabVIEW's internal cooperative multithreading. Synchronous operation would lower CPU usage but block the diagram it is executing in until the operation is finished. The additional CPU usage is insofar not that bad, as LabVIEW quite happily relinquishes the CPU to other tasks if there are any. If there aren't, then yes, it tends to grab quite a lot of the CPU to make sure to service the asynchronous call as fast as possible. Dropped connection errors such as -1073807194 certainly have to do with the driver somehow. It could be that the driver got reset for whatever reason. Maybe a USB bus reset. -1073807343 is an insufficient location error, meaning an invalid VISA resource. -1073807264 is a strange error for a serial port, as Controller In Charge is a GPIB term, but as such it would also point to a driver error. So yes, from two of your errors I would conclude a serious serial port driver or hardware issue. The insufficient location error would point IMO to some wiring errors where you do not wire a valid serial port resource to the functions. The old serial functions were sometimes a bit more forgiving here, as they just assumed a default number that pointed to the first serial port. But that was at the same time a serious software design flaw too. Rolf Kalbermatter
  25. QUOTE(Thang Nguyen @ Feb 15 2007, 07:31 PM) It depends a bit on the connection type. For serial there is not much more than sending a command and waiting for the answer. If that doesn't return anything you can retry once or twice, but then you should consider the connection broken and close and reopen it. TCP/IP can give you a connection closed error, which you should also handle, but that is insofar not enough, as a broken network link does not necessarily mean that you get this error. So for that you do basically the same as for serial. Whatever you do, you should not just do the classical LabVIEW error cluster handling where you do not do anything anymore once any error occurs. Instead you have to detect errors and actually do some error-dependent retry or reconnection attempt to get stable and robust communication. Rolf Kalbermatter