Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. QUOTE(siva @ Sep 20 2007, 03:33 AM) SVN really works the same. But the OP's request was for explicit intermediate commits to the server while away from the network, and there CVS will fail exactly the same as SVN. The only solution I see is to keep the CVS server on the local machine too and put its repository on an external hard drive, or maybe make a regular backup copy of the entire local repository to some other location. I know that this will work with SVN, since an SVN repository is completely self-contained, without any external information that could get lost when doing a simple file backup. Rolf Kalbermatter
  2. QUOTE(manic @ Sep 19 2007, 03:39 AM) If I were tasked with that, my reaction would be simple. Don't even try to redesign it! It will cost at least twice as much time as doing it from scratch, and it still won't be as clean and performant as it should be. Rolf Kalbermatter
  3. QUOTE(Gabi1 @ Sep 19 2007, 04:04 PM) As Aristos already mentioned, the extra memory used for a scalar notifier won't really hurt your memory at all. Occurrences, on the other hand, are not producer/consumer oriented: while multiple places can wait on the same occurrence, their mechanism does not easily allow the opposite. That said, polling an occurrence with a small timeout, while not optimal, will not kill your CPU performance either. Rolf Kalbermatter
  4. QUOTE(karthik @ Sep 19 2007, 02:08 AM) I would think Property Nodes are the magical solution here. Rolf Kalbermatter
  5. QUOTE(Louis Manfredi @ Sep 18 2007, 11:09 AM) Well, such is life. Most things I found by accident, by actively searching for something, or through user groups/word of mouth/LAVA/Info-LabVIEW etc., and when I then looked for them they usually turned out to be somewhere in the printed or online docs, but it is simply too much material to read through completely. I long ago resorted to working with what I know and learning new things as opportunities arise, and I simply don't let myself be bothered that there are still many golden eggs out there that I do not know about. If I had a photographic memory I would simply scan all the documents by flipping through them and rely on that, but alas, I'll have to make do with what I can. Rolf Kalbermatter
  6. QUOTE(Cherian @ Sep 12 2007, 01:58 PM) Wow, you mean you wrote a driver for the FPGA board to communicate with it from within LabVIEW? In that case I would assume you know more about how this could be done than anyone else here on LAVA possibly could, and probably more than almost anyone else except those who work for NI and developed the FPGA product, and maybe a system integrator or two under NDA. Rolf Kalbermatter
  7. QUOTE(tharrenos @ Sep 16 2007, 11:18 AM) As far as LabVIEW is concerned there won't be any problem. WiFi integrates cleanly into the OS networking stack and as such just looks like any other Ethernet interface to LabVIEW, meaning LabVIEW has absolutely no idea whether the packets go over twisted pair, glass fiber, a dial-up modem, or a WiFi network. The central heating part might be a bit more troublesome. You will need some hardware that can control the heating and at the same time has a WiFi interface too. As an embedded device, which would only be interesting for high production volumes, that would require some engineering. If you just put a normal (small form factor) PC beside it and run a LabVIEW executable on that too, the necessary engineering would be quite limited. Rolf Kalbermatter
  8. QUOTE(Louis Manfredi @ Sep 13 2007, 01:21 PM) Yes, upgrade notes would not have helped. Both have been in LabVIEW for as long as I can remember, which is about version 2.5 (or maybe the parse enum wasn't in there yet, but at least since 3.0). Possibly 2.2, Mac only, had it already too. Rolf Kalbermatter
  9. QUOTE(pjsaczek @ Sep 12 2007, 04:47 AM) As indicated already, if you talk to a serial device, always append the end-of-message character for that device explicitly to the command, or, if you want to depend on a specific VISA feature, set the "Serial Settings->End Mode for Writes" property for that VISA session to "End Char" and make sure you also set the "Serial Settings->End Mode for Reads" property and the "Message Based Settings->Termination Character" property accordingly, as well as "Message Based Settings->Termination Character Enable" to True. As already said, this can have problems with (really old) VISA versions, and I find it better to explicitly append the correct termination character to each message myself. Also note that since about VISA 3.0 or so, the defaults for serial VISA sessions are that "Serial Settings->End Mode for Reads" is already "End Char", the "Message Based Settings->Termination Character" property is set to the <CR> ASCII character, and "Message Based Settings->Termination Character Enable" is set to True. However, the "Serial Settings->End Mode for Writes" property is set to "None" for quite good reasons, as it modifies what you send out, and that can be very bad in certain situations. LabVIEW, as a general purpose programming environment, shouldn't do that for you automatically, since there are many somewhat more esoteric devices out there that use a termination character or mode other than an appended <CR> character or <LF><CR> character sequence. Rolf Kalbermatter
  10. QUOTE(pjsaczek @ Sep 10 2007, 10:52 AM) No, LabVIEW is not adding anything to the strings on its own, and I would be really mad if it did. However, if you use a more recent VISA version (let's say less than 5 years old or so), you can configure it to do that automatically for a serial session. But personally I don't find that such a good idea. I prefer to code each command so that it appends the correct end-of-string indication explicitly. Rolf Kalbermatter
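For anyone doing the same configuration from C rather than from the LabVIEW VISA property nodes, here is a minimal sketch using the VISA C API (visa.h from NI-VISA). The resource name "ASRL1::INSTR", the <CR> terminator and the "*IDN?" query are illustrative assumptions, and error handling is reduced to the bare minimum.

```c
/* Minimal sketch: configure termination-character handling for reads,
   leave writes untouched, and append the terminator to the command ourselves. */
#include <stdio.h>
#include <string.h>
#include "visa.h"

int main(void)
{
    ViSession rm, port;
    if (viOpenDefaultRM(&rm) < VI_SUCCESS) return 1;
    if (viOpen(rm, "ASRL1::INSTR", VI_NULL, VI_NULL, &port) < VI_SUCCESS) return 1;

    /* Reads stop at the termination character (<CR> assumed here)... */
    viSetAttribute(port, VI_ATTR_ASRL_END_IN, VI_ASRL_END_TERMCHAR);
    viSetAttribute(port, VI_ATTR_TERMCHAR, '\r');
    viSetAttribute(port, VI_ATTR_TERMCHAR_EN, VI_TRUE);

    /* ...but writes are left alone; the terminator is appended explicitly below. */
    viSetAttribute(port, VI_ATTR_ASRL_END_OUT, VI_ASRL_END_NONE);

    char cmd[] = "*IDN?\r";                 /* terminator appended by hand */
    ViUInt32 count;
    viWrite(port, (ViBuf)cmd, (ViUInt32)strlen(cmd), &count);

    char reply[256];
    if (viRead(port, (ViBuf)reply, sizeof(reply) - 1, &count) >= VI_SUCCESS) {
        reply[count] = '\0';
        printf("%s\n", reply);
    }

    viClose(port);
    viClose(rm);
    return 0;
}
```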
  11. QUOTE(jed @ Sep 11 2007, 04:26 PM) The Windows message queue example is not really meant to hook directly into the queue but only to monitor it. In order to hook into that event you would have to modify the C source code of that example to do so specifically. Not too difficult, but without some good C knowledge not advisable. Another way might be that newer LabVIEW versions send a filter event, "Application Instance Close?", to the event structure, which you can use to disallow shutting down the app. I'm not sure it will disallow shutting down the session directly, though. But it should be enough to detect that there might be a shutdown in progress and allow you to execute the command Adam mentioned to abort it. A rough C sketch of what hooking that event looks like follows below. Rolf Kalbermatter
  12. QUOTE(dsaunders @ Sep 11 2007, 03:09 PM) The private property is somewhat of a pain to use, as there are really several pieces of information you would need to check. First the connector pane layout itself, which is just a magic number for one of the patterns you can choose. Then the array of connections, with an arbitrary number identifying the position on the connector pane each terminal is connected to. And last but not least the data type of each connection, which is a binary type descriptor and can't just be compared byte for byte but needs to be verified on a logical level, since different binary representations do not necessarily mean different data types. Not sure I would want to spend that much time on this! Another possibility, and LabVIEW uses this a lot for its own plugins, is to use the Set Control Value method and then simply run the plugin VI. That makes the connector pane completely independent of the calling information; you just need correctly named controls on the front panel. Using an occurrence (or notifier, etc.) you can wait in the caller for the VI to signal its termination, or simply poll its status until it is no longer running. Rolf Kalbermatter
  13. QUOTE(chrisdavis @ Sep 11 2007, 08:45 PM) My best bet is .NET, although I think the WinAPI will also give you this information at a somewhat lower level, though probably not exactly trivially (see the sketch after this post). Possibly LabVIEW has some private hooks itself; it would make sense to have this information in LabVIEW somehow to make it platform independent. Rolf Kalbermatter
  14. QUOTE(Techie @ Sep 9 2007, 10:04 PM) All comments so far have been valid and right. One other thing: creating controls, while possible in principle, requires the use of so-called scripting, a feature that has not been released by NI and has been made difficult to access too. Also, you cannot do it on a VI that is running, so for most user applications it is meaningless, since you usually want to do such things from the VI itself. LabVIEW cannot make any modifications to a running VI if those changes would modify the diagram in any way, and dropping a control on a front panel does modify the diagram. That is also a limitation of the compiling nature of LabVIEW. Rolf Kalbermatter
  15. QUOTE(brianafischer @ Sep 8 2007, 01:50 PM) What version of LabVIEW are you using? LabVIEW since at least version 7 is fully multi-monitor aware and moves windows so that some part of the window is always visible, although it does not center them on the primary screen, which I would find too intrusive anyhow. Making an application always open its windows on the secondary screen, independent of their saved location, is definitely a very esoteric requirement. Personally I find it rather disturbing to have to look right or left of the main screen depending on the setup; it's already hairy enough to switch between single-monitor and multi-monitor use. But I know many applications that always open their window in the last saved monitor location, whether that monitor is present or not, and I really find that an annoying behaviour, especially if they replace the default system menu too, so that you can't even move the window from the task bar with the cursor keys. Upgrading to LabVIEW 8.5 could help you a lot, though. There you can configure per front panel on which screen it should be displayed, and LabVIEW will honor that if that monitor is present. But I think this is mostly a runtime feature; I'm not sure it will have the desired effect on windows in edit mode, and definitely not for the diagram windows. Rolf Kalbermatter
  16. QUOTE(yen @ Sep 8 2007, 02:22 PM) In order to be able to connect to a LabVIEW application over ActiveX, it must also have been built with that option enabled. I personally always make sure that option is disabled, for various reasons including safety. Rolf Kalbermatter
  17. QUOTE(yen @ Sep 5 2007, 01:36 PM) Just because there is a Basic development environment for this target does not mean that there couldn't be a (GNU-based) C toolchain for it too. Rolf Kalbermatter
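As a rough illustration of what modifying that C source would involve, this fragment shows a window procedure that vetoes a Windows session end via WM_QUERYENDSESSION. The window creation and message-loop plumbing (as in the example's original C source) are assumed to exist elsewhere, and how the veto is reported back to the LabVIEW side is left open.

```c
/* Sketch only: a window procedure that objects to a shutdown/logoff request. */
#include <windows.h>

static LRESULT CALLBACK HookWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_QUERYENDSESSION:
        /* Returning FALSE tells Windows this application objects to the session
           ending; at this point the LabVIEW side could be notified (e.g. via a
           user event or occurrence) so it can shut down cleanly on its own terms. */
        return FALSE;
    case WM_ENDSESSION:
        /* If the session really does end anyway, do last-ditch cleanup here. */
        return 0;
    default:
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }
}
```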
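For the lower-level WinAPI route mentioned above, here is a small plain-C sketch (link against user32) that enumerates the attached monitors and prints their bounding rectangles; wrapping this for LabVIEW via a DLL or Call Library Function Node is not shown.

```c
/* Enumerate all display monitors and print their coordinates. */
#include <windows.h>
#include <stdio.h>

static BOOL CALLBACK PrintMonitor(HMONITOR hMon, HDC hdc, LPRECT rc, LPARAM data)
{
    MONITORINFO info;
    info.cbSize = sizeof(info);
    if (GetMonitorInfoA(hMon, &info))
        printf("(%ld,%ld)-(%ld,%ld)%s\n",
               info.rcMonitor.left, info.rcMonitor.top,
               info.rcMonitor.right, info.rcMonitor.bottom,
               (info.dwFlags & MONITORINFOF_PRIMARY) ? " [primary]" : "");
    return TRUE;   /* keep enumerating */
}

int main(void)
{
    EnumDisplayMonitors(NULL, NULL, PrintMonitor, 0);
    return 0;
}
```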
  18. QUOTE(tcplomp @ Sep 7 2007, 02:38 PM) And just to show what the pre-LabVIEW 8.5 version of this code would look like: http://forums.lavag.org/index.php?act=attach&type=post&id=6885 Forget about the unwired loop termination! ;-) That is not the point here. And if anyone wonders why one would write a VI in such a way: it is called pipelined execution, and it has advantages on multi-core or multi-CPU machines, as LabVIEW will simply distribute the different blocks onto different CPUs/cores if that is possible at all. On single-core systems it has no real disadvantage in terms of execution speed, but this construct of course takes a memory hit, because the shift registers store double the data between iterations compared to what a linear execution would need. A rough textual analogue of the idea follows below. Rolf Kalbermatter
  19. QUOTE(Jim Kring @ Sep 7 2007, 10:22 AM) Just learned that yesterday at the local LabVIEW day here in the Netherlands, presented by Jeff Washington. His example had a loop but was about pipelined execution, and boy, I can tell you that although I'm excited about this feature, it does take getting used to. Basically, with this node you sort of have to forget a few things about data flow and wire dependency. And yes, Jeff mentioned that the original Feedback Node was implemented by an intern; they had assumed he had chosen to implement it simply as a folded shift register, but that seems not to have been the case, and that is why it was much slower than a shift register. In 8.5, however, Jeff claimed that the Feedback Node should, in every aspect we as users could possibly measure, behave exactly like a shift register. Probably there is also already an NI patent pending for it :-) Rolf Kalbermatter
  20. QUOTE(wbiker @ Sep 7 2007, 04:02 AM) Most likely you either call some external code incorrectly when this event occurs, or, also possible, you do not properly close some functionality contained in external code, which makes it hang. Another possibility is that you have multiple loops in your application and one of them, instead of exiting too, starts to run freely without any asynchronous delay function in it anymore. But without seeing some of your code, and preferably a seriously scaled-down application that still exhibits this behaviour, it is not possible to give you more specific advice. Rolf Kalbermatter
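For readers who don't think in diagrams, here is a rough textual analogue of that pipelining idea in C (the two stage functions are made up): the variable that carries one stage's result into the next iteration plays the role of the shift register, and it is exactly that carried copy which causes the extra memory use mentioned above.

```c
/* Pipelined loop sketch: stage_b() in iteration i consumes the stage_a()
   result produced in iteration i-1, mirroring the shift registers. */
#include <stdio.h>

static int stage_a(int x) { return x * 2; }   /* placeholder stage 1 */
static int stage_b(int x) { return x + 1; }   /* placeholder stage 2 */

int main(void)
{
    int carried = 0;                     /* plays the role of the shift register */
    for (int i = 0; i < 10; i++) {
        int fresh = stage_a(i);          /* stage 1 on the current element       */
        if (i > 0)
            printf("%d\n", stage_b(carried));  /* stage 2 on the previous one    */
        carried = fresh;                 /* hand the result to the next iteration */
    }
    printf("%d\n", stage_b(carried));    /* drain the pipeline                   */
    return 0;
}
```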
  21. QUOTE(TobyD @ Sep 6 2007, 10:32 AM) Or before there was VI server, like this: http://forums.lavag.org/index.php?act=attach&type=post&id=6875 Rolf Kalbermatter
  22. QUOTE(wbiker @ Sep 7 2007, 02:28 AM) By clicking the stop button on the toolbar you are not stopping the VI but really aborting it. If program execution was at that point inside an external code part, you can end up in a locking situation, since the external code may wait on some event message processing that will never happen, because the whole application was basically taken down brutally. Most NI drivers, especially in newer LabVIEW versions, are written in such a way that they get informed by LabVIEW about aborts so that they can cancel any waiting, but third-party drivers usually don't have that, also because the means of being informed by LabVIEW about aborts are not really documented. But even NI drivers can still sometimes get stuck in such a way. The toolbar stop button is really just a last-resort way to abort a program, or a tool for quick and dirty testing, but it should not be used as the normal way of starting and especially stopping a LabVIEW program. A LabVIEW program should have some sort of event processing with an explicit quit button, in which case the event processing loop (and any other parallel loop that might be running) gets properly terminated, after which you can clean up any DAQ, instrument, IO-bus, etc. operation properly by closing those resources; once the last item on the diagram has executed, the program stops too, but this time cleanly. Rolf Kalbermatter
  23. QUOTE(Karissap @ Aug 31 2007, 01:38 AM) Only if you are quite good at C and at integrating that into LabVIEW through a DLL. The System Tray API in Windows uses callbacks, and integrating those with LabVIEW is not trivial. ActiveX would be easier, but I don't think there is a standard Windows ActiveX control for the System Tray. QUOTE(MikaelH @ Aug 30 2007, 08:17 PM) You can then create the "Icon tray" object, set its properties/icons, and then, when a user interacts with the system tray icon, the .NET part fires callbacks to LabVIEW. The .NET System Tray API seems to have a problem somehow. I've seen several applications using it, and they all seem to leave the icon in the system tray when the application closes. Apparently there is no way to make such an icon properly disappear, especially when the application closed unexpectedly. Rolf Kalbermatter
  24. QUOTE(LV Punk @ Aug 29 2007, 06:26 AM) Since you plan on throwing away all but the last data anyhow, a notifier may be better than a queue in this particular situation. Rolf Kalbermatter
  25. QUOTE(jlokanis @ Aug 29 2007, 12:43 PM) Well, if you have a LabVIEW version before 8.0 somewhere, you can copy the stuff from there; there was no lvlib before 8.0. Alternatively, I wrote a library long ago that works similarly to the NI Eval Formula function but is implemented a bit differently, dividing the evaluation cleanly into a parser that creates UPN (RPN) stack intermediate code and an evaluator that operates on that intermediate code. I needed that for speedy calculations where user-entered formulas were needed but didn't change all the time, so parsing the formula once and then evaluating it over and over again had some serious performance advantages. The parsing is not trivial but a lot more streamlined than the NI code, and the entire thing is also faster. It hasn't been updated for several years but worked fine for what I needed it for. LabVIEW 5.0 ExprEval.zip Rolf Kalbermatter
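For comparison, the plain WinAPI route looks roughly like this (C, Shell_NotifyIcon from shell32); the window handle, callback message and tooltip text are made-up placeholders. The point is the explicit NIM_DELETE call on shutdown, which appears to be the step that gets skipped when those .NET-based applications terminate abnormally.

```c
/* Sketch: add a tray icon that reports mouse events to a window, and remove
   it again explicitly on every exit path. Link against shell32 and user32. */
#include <windows.h>
#include <shellapi.h>

#define WM_TRAYICON (WM_APP + 1)   /* hypothetical callback message */

static NOTIFYICONDATA nid;

void AddTrayIcon(HWND hwnd)
{
    ZeroMemory(&nid, sizeof(nid));
    nid.cbSize = sizeof(nid);
    nid.hWnd = hwnd;
    nid.uID = 1;
    nid.uFlags = NIF_MESSAGE | NIF_ICON | NIF_TIP;
    nid.uCallbackMessage = WM_TRAYICON;          /* mouse events arrive here */
    nid.hIcon = LoadIcon(NULL, IDI_APPLICATION); /* placeholder icon */
    lstrcpyn(nid.szTip, TEXT("My LabVIEW app"), ARRAYSIZE(nid.szTip));
    Shell_NotifyIcon(NIM_ADD, &nid);
}

void RemoveTrayIcon(void)
{
    /* Must run on every exit path, otherwise a stale icon lingers in the tray
       until the user mouses over it. */
    Shell_NotifyIcon(NIM_DELETE, &nid);
}
```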
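For those curious what "parse once, evaluate many times" looks like in conventional code, here is a heavily simplified sketch in C: numbers and + - * / only, no variables, functions or error handling, and not the attached LabVIEW library itself, just the same two-phase idea.

```c
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* One token of the intermediate code: either a number or an operator. */
typedef struct { int is_op; double val; char op; } Tok;

static int prec(char op) { return (op == '+' || op == '-') ? 1 : 2; }

/* Phase 1: shunting-yard parse of the infix text into an RPN token stream. */
static int parse_rpn(const char *s, Tok *out)
{
    char ops[64]; int nops = 0, n = 0;
    while (*s) {
        if (isspace((unsigned char)*s)) { s++; continue; }
        if (isdigit((unsigned char)*s) || *s == '.') {
            char *end;
            out[n].is_op = 0; out[n].val = strtod(s, &end); n++; s = end;
        } else if (*s == '(') {
            ops[nops++] = *s++;
        } else if (*s == ')') {
            while (nops && ops[nops - 1] != '(') { out[n].is_op = 1; out[n].op = ops[--nops]; n++; }
            if (nops) nops--;                      /* discard the '(' */
            s++;
        } else {                                   /* one of + - * /  */
            while (nops && ops[nops - 1] != '(' && prec(ops[nops - 1]) >= prec(*s)) {
                out[n].is_op = 1; out[n].op = ops[--nops]; n++;
            }
            ops[nops++] = *s++;
        }
    }
    while (nops) { out[n].is_op = 1; out[n].op = ops[--nops]; n++; }
    return n;
}

/* Phase 2: evaluate the RPN stream; cheap enough to call in a tight loop. */
static double eval_rpn(const Tok *t, int n)
{
    double stk[64]; int sp = 0;
    for (int i = 0; i < n; i++) {
        if (!t[i].is_op) { stk[sp++] = t[i].val; continue; }
        double b = stk[--sp], a = stk[--sp];
        switch (t[i].op) {
        case '+': stk[sp++] = a + b; break;
        case '-': stk[sp++] = a - b; break;
        case '*': stk[sp++] = a * b; break;
        case '/': stk[sp++] = a / b; break;
        }
    }
    return stk[0];
}

int main(void)
{
    Tok rpn[64];
    int n = parse_rpn("(1 + 2) * 3 - 4 / 2", rpn);  /* parse once...          */
    for (int i = 0; i < 3; i++)                      /* ...evaluate many times */
        printf("%g\n", eval_rpn(rpn, n));            /* prints 7 three times   */
    return 0;
}
```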