
Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. I see why you think that, yet this tends to be rather static data, unless you happen to start up and exit lots of service-enabled NI and LabVIEW software applications. So it's not really that exciting to watch.
  2. Actually, a better fix is to apply the correct logic in ZLIB Store File.vi itself, since there is already code in there to detect whether the entry is a file or a directory. I will look into this as soon as I have finished setting up the essential tools on my new machine; the old machine is kind of dying, and I'd rather not use it for any work anymore.
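The branch-on-entry-type logic can be sketched outside LabVIEW as well; here is a Python analogue of what ZLIB Store File.vi needs to do (the function name store_entry is mine for illustration, it is not part of the OpenG library):

```python
import os
import zipfile

def store_entry(zf, path, arcname):
    """Store a filesystem entry into an open ZipFile, branching on
    whether it is a directory, as the fix described above requires."""
    if os.path.isdir(path):
        # Directories are stored as a name ending in '/' with no data.
        zf.writestr(arcname.rstrip("/") + "/", b"")
    else:
        zf.write(path, arcname)
```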
  3. Actually, a correction to my earlier post. The VI library mentioned does not allow you to see all the registered services, but instead you can enter this in your preferred web browser: http://localhost:3580/dumpinfo?
  4. You could look into service names. The TCP and UDP functions in recent LabVIEW versions allow you to wire a string to the port input, which can define a service name. For server functionality this registers the service name, together with the dynamically allocated port number, with the NI Service Locator service on that machine. For client functionality it queries the Service Locator for the port number of the specified service name. I saw in the LabVIEW 2010 configuration yesterday that you can specify a service name to be registered for the current LabVIEW instance, and I'm sure that can be put into the executable configuration too. I'm not sure, however, how that works in combination with multiple instances of VBAI, since they would all use the same configuration file at startup. There is also a VI library in vi.lib/Utility/ServLocater.llb that allows you to interact with the Service Locator programmatically. It allows you to query the Service Locator for all currently registered services, so you may be able to see whether starting up multiple VBAI instances registers any service names and what pattern those names follow.
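The register/lookup pattern can be illustrated with a small Python sketch. This is purely a toy stand-in for the idea, not NI's actual Service Locator API; the real service does this machine-wide behind port 3580, while here a plain dict plays the registry:

```python
import socket

# Toy registry standing in for the Service Locator described above.
registry = {}

def register_service(name):
    """Server side: bind to a dynamically allocated port and publish
    it under a service name."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))          # port 0 = let the OS pick one
    s.listen(1)
    registry[name] = s.getsockname()[1]
    return s

def lookup_service(name):
    """Client side: resolve the service name back to the port."""
    return registry.get(name)
```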
  5. Font is not equal to font. There are so-called bitmap fonts and TrueType fonts. Bitmap fonts can only be displayed in the sizes for which the font file contains specific bitmaps. Maybe Windows 7 supports resampling them in the user interface nowadays, I'm not really sure about that, but the result is always suboptimal, with artefacts such as blurry edges. However, most printers still don't know how to resize bitmap fonts, and I would be surprised if the Report Generation Toolkit even attempts to support that. Courier New is a TrueType font; those are defined in vectorized form and can therefore be resized much more easily, and Windows supported resizing them basically from the moment the feature was introduced, in a similar way to what Apple had decided to do earlier. TrueType fonts were originally developed at Apple, and Microsoft took the technology over, not without making some minor modifications to the meaning of some of the parameters in the font file, so that a one-to-one copy wasn't possible.
  6. Ah well, .NET. Always a possibility, but annoying to use. In my searches I found that LabVIEW 3.x and 4.x exported a C manager function that would have allowed doing that in a CIN, and after 5.0 with a Call Library Node, but alas, the LabVIEW gods at that time decided that this was an unnecessary private API and removed it.
  7. I assume you want to display a selection box with all the possible fonts listed. And I have to inform you that despite very deep searching in the depths of LabVIEW's basement and attic, I have not found a way to get this information into an application at runtime. My detective work was carried out two or so years ago, so there is a small chance that 2009 or 2010 has some hidden possibility for this, but I somehow doubt it.
  8. Well, for fading you have to use alpha blending, and alpha blending has the unfortunate property of requiring retransmission of the entire affected area with every step of the continuous fading effect. That is a lot of bitmap data! RDP, VNC, and similar tools have a lot of optimization built in to reduce the amount of data that needs to be transmitted. I'm sure RDP transmits the actual GDI drawing commands rather than bitmap data whenever possible, and when it does transmit bitmaps, it most likely only transmits the parts that change from one frame to the next. Yet with fading, EVERYTHING changes constantly! Maybe at some point they will transmit fading as a drawing primitive too, so the actual transmitted data would be much smaller, but for now it is a bitmap transfer over and over again. And because it is a rather cosmetic feature, they surely won't declare it a top-priority item as long as there are other areas that still need improvement too.
  9. Exactly, and you already made the specific limitation when you said "in Cartesian space". If that were all that is possible, then your squangle would indeed be quite useless, but we can imagine, and some can even calculate in, very non-Cartesian spaces, so your squangle does have a valid place. My statement was also meant more to provoke than to state something I believe in. Altruism as this guy interprets it is very specific, and as Yair already pointed out, he even seems to ignore one interpretation he brought up himself. Your pleasure-pain equation has a lot of merit and can explain many different behaviors, but it mostly ignores the countless possible reasons why some people have such an extraordinary pleasure-pain equation compared to the masses. There seems to be something that defines that equation, and I highly doubt it is just a random collection of brain cells.
  10. So basically altruism is a nothing, and should be deleted from our vocabulary, because it names something that according to this definition can't exist. Or maybe we have been looking at only one specific definition of altruism so far, and are really missing the point altogether.
  11. Now we are all curious as to what you did that made it suddenly work!
  12. Well, here are a few recent ones, and also some from February last year or thereabouts:
      http://forums.openg.org/index.php?showtopic=1997&view=findpost&p=5038
      http://forums.openg.org/index.php?showtopic=1998&view=findpost&p=5039
      http://forums.openg....findpost&p=3526
      http://forums.openg....findpost&p=3527
      http://forums.openg....findpost&p=5034
      http://forums.openg....?showtopic=1104 (several by Retapitle)
      http://forums.openg....p?showtopic=994 (several by Retapitle)
      http://forums.openg....findpost&p=4851
      http://forums.openg....p?showtopic=960 (last two posts)
      http://forums.openg....findpost&p=2816
      http://forums.openg....findpost&p=3000
      http://forums.openg....findpost&p=3008
      http://forums.openg....findpost&p=3086
      http://forums.openg....findpost&p=3118
      http://forums.openg....findpost&p=5037
      http://forums.openg....findpost&p=3010
      http://forums.openg....findpost&p=3006
      http://forums.openg....findpost&p=2998
      http://forums.openg....findpost&p=2993
      http://forums.openg....findpost&p=3063
      http://forums.openg....findpost&p=4728
      http://forums.openg....findpost&p=3067
      http://forums.openg....findpost&p=2999
      http://forums.openg....findpost&p=2996
      http://forums.openg....findpost&p=3012
      http://forums.openg....findpost&p=3004
      http://forums.openg....findpost&p=2994
      http://forums.openg....findpost&p=3002
      http://forums.openg....findpost&p=3009
      http://forums.openg....findpost&p=3078
      http://forums.openg....findpost&p=3013
      http://forums.openg....findpost&p=3001
      http://forums.openg....findpost&p=3075
      http://forums.openg.org/index.php?showtopic=1084&view=findpost&p=3011
      http://forums.openg.org/index.php?showtopic=935&view=findpost&p=3003
      I consider any nonsense post with non-LabVIEW/NI/LAVA/OpenG-related links in the footer to be spam, even if the bot was smart enough to grab some words from earlier posts and make it appear to be on topic. A single-post user account with a useless message usually looks like spam to me as well.
      A few of these could maybe be left, although I'm sure they were just test messages by the bot creator to see if it works. "Hello dude, this is great info" with only that single post for the account is simply useless. For obvious cases it would be good to disable the account too. A fairly good indication for these posts is also that the username usually has a two-digit number appended.
  13. Are you sure Windows is not blocking network access for those applications? When first starting up, you should have gotten a dialog asking whether the application is allowed to access the network. This dialog is easy to just click away! And a DLL running in the LabVIEW process is simply treated as the LabVIEW process by the Windows firewall rules.
  14. The only incompatibility I'm aware of for VI Server communication between different LabVIEW versions was, I believe, between 5.1 and 6.0, and only from a 5.1 server to a 6.0 client, because they added some extra features to the initial connection-establishment message.
  15. It depends what you mean by performance. For me, performance is mostly about speed, and Deallocate Memory has only a negative effect on that, if any at all. In most situations it does nothing nowadays. In earlier LabVIEW versions it was supposed to do some memory compacting, but that mostly resulted in bad slowdowns and helped little in squeezing more memory out of a machine. I believe Ben's statement that nowadays it will only affect claimed data chunks from VIs that have gone idle is correct.
  16. And the OpenG ZLIB library supports extracting files directly into memory strings rather than to a disk file!
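As a rough analogue of what the OpenG VI does, the same in-memory extraction looks like this with Python's zipfile module:

```python
import io
import zipfile

# Build a small archive entirely in memory, then read a member
# directly into a byte string, with no temporary file on disk.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("hello.txt", "Hello, archive!")

with zipfile.ZipFile(buf) as zf:
    data = zf.read("hello.txt")   # bytes in memory

print(data.decode())  # → Hello, archive!
```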
  17. Simple: disable the "Allow user to close" setting in the VI properties dialog. More involved: add the Panel Close? filter event to your event structure and wire TRUE to its Discard? terminal, but terminate your event handling loop anyway to return to the caller.
  18. A full tank of gas will not easily explode, although you sure can end up roasted in a big fire. It's the almost empty tank of gas that will explode in a very nasty way!
  19. You can't do that! The error cluster of the CLFN is for reporting runtime errors from LabVIEW in trying to load the DLL and execute the function, including possible exceptions when you have configured the CLFN to use the maximum error checking level. If you want to pass error information from your function to the LabVIEW diagram, you have to do it through a function parameter or the function return value. I have done both.
      Method 1: When an error code is returned by all functions, I have a common error handler VI that is placed after each CLFN and converts this error code into a LabVIEW error cluster.
      Method 2: Or you can pass the error cluster as an extra parameter:

      #pragma pack(1)
      typedef struct
      {
          LVBoolean status;
          int32 code;
          LStrHandle message;
      } ErrorCluster;
      #pragma pack()

      static MgErr FillErrorCluster(MgErr err, char *message, ErrorCluster *error)
      {
          if (err)
          {
              int32 len = StrLen(message);
              error->status = LVBooleanTrue;
              error->code = err;
              err = NumericArrayResize(uB, 1, (UHandle*)&(error->message), len);
              if (!err)
              {
                  MoveBlock(message, LStrBuf(*error->message), len);
                  LStrLen(*error->message) = len;
              }
          }
          return err;
      }

      MgErr MyFunction(......, ErrorCluster *error)
      {
          MgErr err = error->code;
          if (!error->status)
          {
              err = CallAPIFunction(.....);
              FillErrorCluster(err, "Error here!!!", error);
          }
          return err;
      }

      I then use this error cluster to feed through the VI, not the error cluster from the CLFN itself. The CLFN error is useful during development and debugging to see possible errors and understand why something doesn't work, but for most use cases, once the CLFN call has been tested and debugged, the node should not return any runtime error anymore. The function call itself, however, might, otherwise the whole exercise of passing the error cluster as a parameter would be quite senseless.
  20. The underlying getaddrinfo(), at least on Windows, will return all local network adapter addresses, and LabVIEW will pick the first one from that list to use. So it binds to the actual default IP address of your machine. When you disconnect the cable, this IP address (and adapter) becomes invalid, and your connection is working through an unconnected socket, which of course gives errors. When the cable is not connected at the time the address is resolved and the socket is bound, the first adapter returned will be a different one; it could be the loopback interface or your WiFi interface. In that case connecting and disconnecting the cable does not have any influence on the connection. Using 127.0.0.1 or localhost explicitly will bind to the loopback adapter, and that one is valid for as long as Winsock is working, i.e. until you shut down your computer.
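The difference can be demonstrated with a small Python sketch; Python's socket module wraps the same getaddrinfo() and bind() calls:

```python
import socket

# getaddrinfo() can return several candidate addresses for one name;
# code that always takes the first entry ends up bound to one specific
# adapter, which is the behavior described above.
infos = socket.getaddrinfo("localhost", 0, socket.AF_INET, socket.SOCK_STREAM)
first_address = infos[0][4][0]

# Binding to 127.0.0.1 explicitly ties the socket to the loopback
# adapter, which stays valid regardless of any cable being plugged in.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
host, port = s.getsockname()
s.close()
print(first_address, host, port > 0)
```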
  21. As always with new versions: during the NI Week conference, in the first or second week of August. Expect to be able to download it during or after NI Week, and to see MSP shipments of disks a few weeks later. I didn't install 2010 right away and waited for SP1, but I can't say it seems slower or less stable than 2009 or 8.6, except that the installation took almost forever (12 hours and more for the developer suite installation, another evening/night for the device drivers, and then another evening/night for the SP1 update). I attribute that largely to the three-year-old Windows installation that holds, among other things, every version of LabVIEW since 5.1.
  22. This is a hack, since different language versions of Windows will probably name this differently. You Americans need to realize that the world doesn't consist only of English-speaking people. And of all people, Michael, I would have thought you should know that. A much better way would be something along these lines. Tested only on 32-bit Windows XP for the moment, but in theory it should work on all versions of Windows XP and newer. LV2010 WINAPI Is 64 Bit OS.vi
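The underlying principle, query the system programmatically instead of parsing a localized display string, can be illustrated in Python. This is only an illustration of the idea, not what the attached VI actually does:

```python
import platform
import sys

# Two different questions, neither answered by string parsing:
# pointer size of *this process* vs. architecture of the *OS*.
is_64bit_process = sys.maxsize > 2**32   # True in a 64-bit process
os_machine = platform.machine()          # e.g. 'AMD64', 'x86_64'

# A 32-bit process can still run on a 64-bit OS (WOW64 on Windows),
# which is exactly why an OS check must not rely on the process size.
print(is_64bit_process, os_machine)
```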
  23. On my computer it took more like 8 hours for the LabVIEW 2010 installation (full developer suite installation with most toolkits included), another 6 hours for the device driver installation, then again 6 hours for the SP1 installation, and after that another 4 hours or so for the LabVIEW Embedded for ARM evaluation. This is on a "very old" dual-core 2.2 GHz notebook with Win XP SP3, but it has just about every LabVIEW version and accompanying tools installed since LabVIEW 6.0, and quite a bit of other software, so MSI database overload may be a large part of the problem. My new upcoming computer should be a bit faster, and I plan to use separate VM installations for the different older LabVIEW versions, and depending on the speed maybe for all of them. On the current machine with VMware, the performance of running any LabVIEW installation inside a VM is quite bad.
  24. Why bother with it now? When I started with it, it was for LabVIEW 5.1 or so. No such option there. Then we wanted to keep it working in 6.0 and, at the earliest, 6.1. No Conditional Disable structure, and even in the versions where it is available, it has some issues, breaking a VI if something in the disabled cases can't be compiled. And the Conditional Disable structure wouldn't take care of everything anyhow. First, you don't want to write different code in the VI for each platform. It's much easier to keep that in one C file than in many VIs. And since there are usually some things that are easier to translate into LabVIEW-friendly datatypes in C than with pointer voodoo on the diagram, that C file is there anyhow. Second, there are platform-specific distribution issues anyhow, such as the need to archive the Mac OS shared library resource fork, since you otherwise lose it when building the package on a non-Mac computer, which makes the shared library useless (it loses most of the information about how to load the library). Also, why install shared libraries into a system where they are useless? Just install whatever is necessary and leave the rest out. I'm not so fond of Windows 7, because they hid lots of the more advanced settings quite effectively. It will take some time to find them.
  25. Even in LabVIEW 2010 with private properties enabled there appears to exist no such possibility.