Everything posted by Mark Smith

  1. QUOTE (jdunham @ Mar 4 2009, 10:46 AM) Really? I've never tried and couldn't see where a domain name server would be invoked - my bad
  2. QUOTE (LVBeginner @ Mar 4 2009, 09:47 AM) TCP Open Connection won't do a domain name lookup for you and resolve the address - put the IP address of the server in the address box and try again
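A quick Python sketch (not LabVIEW, but the same idea) of the workaround: resolve the hostname yourself, then wire the numeric address to TCP Open Connection. "localhost" below is just a stand-in for the real server name.

```python
import socket

# Resolve the DNS name to a numeric IPv4 address first, then use that
# string as the address instead of the hostname.
# "localhost" is a placeholder for the actual server name.
ip = socket.gethostbyname("localhost")
print(ip)
```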
  3. QUOTE (Cat @ Mar 3 2009, 08:08 AM) One way to get more info about the conditions when the program crashes would be to run a network protocol analyzer in the background (Wireshark is a good choice) to log the TCP/IP traffic. Then, when the program goes haywire, you at least have a record of what was happening on the network. If this log doesn't show any weirdness, then you can start looking at other parts of the program. This should be relatively easy to do: download and install Wireshark (http://www.wireshark.org/) and set up message logging on the stream in question. Mark
  4. I think this is exactly what you should expect. LabVIEW won't have any knowledge of which .NET assemblies get called unless LabVIEW specifically links to them with an Invoke/Property/Constructor node, so the LabVIEW builder won't automatically include any of the .NET assemblies it doesn't know about. If the referenced assemblies are part of the .NET runtime (like anything in the mscorlib namespace), they're in the GAC (Global Assembly Cache) and available to all other .NET assemblies without any other effort on the developer's part. If the assemblies referenced are developer-defined (not part of the .NET Framework), then the developer has to specifically get them onto the target machine. You could put them in the GAC (but they must be strongly named), but the easiest way is just to include them with your app by going to the Source Files pane of the Build Specification for your DLL and adding the assemblies you need to Always Included. Then all of the assemblies you need will get copied to your Support Directory - the path is shown on the Destinations page of the Build Specification - which defaults to the Data folder unless you change it. Without a good reason to change it, you should just leave it as is, since the LV exe will always look first in the Data folder for support files. So the search path is: LV exe -> Data folder -> top-level .NET assembly -> other .NET assemblies, looked for in 1) the Data folder (.NET always looks in the same folder as the root assembly), 2) the GAC, 3) custom destination(s) set by the calling assembly's manifest? I'm not sure I recall this correctly. Mark Edit: Another possible course is to create a top-level assembly in the .NET environment, build it as a DLL, and then call that from LabVIEW. Visual Studio will detect dependencies and allow you to build a DLL and installer that includes all of the dependencies of the .NET DLL.
  5. QUOTE (arif215 @ Feb 23 2009, 12:21 PM) I'm not sure what your actual application is but if I had a C++ app already written and I wanted to use the NI display widgets (graphs, buttons, etc like in LabVIEW) I would use Measurement Studio. If, on the other hand, your C++ code exposes a DLL interface then it might make sense to use LabVIEW and Call Library Nodes to build a UI around it. Mark Edit: add links http://zone.ni.com/devzone/cda/tut/p/id/2719 http://zone.ni.com/devzone/cda/tut/p/id/4807
  6. QUOTE (neBulus @ Feb 20 2009, 08:47 AM) Ben, You may have the answer - IIRC (from my fuzzy recollection of RTOS class), you can starve a thread/process/whatever so that it never executes if higher-priority tasks are still requesting CPU cycles. So, if some of the code is written so that it always operates at a lower priority, adding the zero msec wait ("Wiring a value of 0 to the milliseconds to wait input forces the current thread to yield control of the CPU" - LV Help) should make sure it executes. So maybe when the close connection is called asynchronously it never really gets executed, because it never gets a high enough priority, but when called serially it's not competing for cycles, does get executed correctly, and the port gets released for re-use. This is all complete speculation, so TIFWIIW. Mark
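The yield behavior is easy to demonstrate outside LabVIEW. Here's a Python sketch - note Python threads have no priorities, so this only illustrates the cooperative-yield half of the argument, not starvation:

```python
import threading
import time

progress = {"low_priority": 0, "high_priority": 0}

def worker(name, iterations):
    for _ in range(iterations):
        progress[name] += 1
        time.sleep(0)  # yield the CPU, like wiring 0 ms to LabVIEW's Wait

threads = [threading.Thread(target=worker, args=(n, 1000)) for n in progress]
for t in threads:
    t.start()
for t in threads:
    t.join()
# both loops complete because each one keeps yielding to the other
```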
  7. I passed my CLD! Lucky for me, Michael didn't get to grade my exam - or maybe his advice kept me from making silly mistakes! Thanks to all for the feedback and help - LAVA is a great community. Mark
  8. QUOTE (Mark Yedinak @ Feb 5 2009, 12:13 PM) I think the reason instrument manufacturers do this is cost and performance. It's really inexpensive compared to GPIB (I just did a quick Google search and found a network interface chip for <$5 and a GPIB chip for around $30), and all personal computers come equipped with a NIC (a GPIB interface card could set you back a few hundred dollars). Also, 20 feet of Cat 5 cable w/connectors is at most a $10 item, and a 20-foot GPIB cable is more like $100. So this has become the "new serial" port for some vendors (of course, USB is really the "new serial" and some vendors use that). So these aren't really "network" devices in the more conventional sense (someone from across the country connecting to my Tek scope is kind of pointless), they're just devices leveraging cheap communication technology. Mark
  9. QUOTE (Mark Yedinak @ Feb 5 2009, 11:45 AM) Mark, I'm gonna guess that you and I are working from the same paradigm, where we see TCP/IP and client and we start thinking about servers and distributed apps and networks. But I think this problem is just one of local instrument control, where an Ethernet cable and a NIC can replace a GPIB cable and card. So this is like having to address four instruments that all have the same GPIB primary address that can't be changed (I know that doesn't make any sense, but then again I can't imagine an Ethernet-enabled instrument where the IP address can't be changed, but that's apparently the case) by addressing four separate GPIB controllers, so the VISA string might look like GPIB0::2, GPIB1::2, and so on. But I'm extrapolating here, since we haven't heard back from jbhee on this! Mark Edit: also what Phillip said (he got that in before I posted)
  10. QUOTE (OlivierL @ Feb 5 2009, 10:45 AM) That is very interesting! I've never had occasion to use TCP/IP for instrument communication, so that one went right past me. Do you know of any examples using VISA that implement a client/server? Not that I have a real need, I'm just curious. Thanks, Mark
  11. QUOTE (Ton @ Feb 5 2009, 07:50 AM) I was thinking more that if your event datatype changed and you did not update the control/constant wired to the "Type" input, you could get some really interesting run-time results! Mark
  12. QUOTE (nicolasB @ Feb 5 2009, 06:14 AM) It appears the event name in the event handler structure is going to be the same as the name of the control that it references. So, the only way I see to do this is to get the event in the called VI, load it into a control with a new name, and then use that control to register the event. I don't really like this (it requires loading the control with a local variable or property node, since just passing the data through the control won't affect it), but it does work - see attached VI. Maybe someone else has a better idea. Mark Download File: http://lavag.org/old_files/post-1322-1233843289.vi OK, the typecast looks interesting, but I think it would require care to make sure you don't get a run-time failure from an improper cast.
  13. QUOTE (jbhee @ Feb 4 2009, 10:11 AM) The short answer is "you cannot" - LabVIEW does not expose an interface to allow selection of a specific interface for outbound TCP/IP packets. And that's because the OS does the selection. Here's a little more info: http://digital.ni.com/public.nsf/websearch...5E?OpenDocument http://support.microsoft.com/kb/175396 It appears you may be able to do some command-line programming to set temporary routes http://www.tech-recipes.com/rx/478/nt2000x...-routing-table/ and set the route "if" parameter to force a specific interface.
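For what it's worth, the usual socket-level trick (outside LabVIEW) is to bind the socket to the local address of the NIC you want before connecting; the OS then routes outbound packets through that interface. A hedged Python sketch - the IPs in the usage comment are placeholders:

```python
import socket

def open_via_interface(local_ip, host, port, timeout=5.0):
    # Binding to the interface's own IP (port 0 = any ephemeral port)
    # forces outbound traffic through that NIC.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    s.bind((local_ip, 0))
    s.connect((host, port))
    return s

# e.g. open_via_interface("192.168.1.50", "192.168.1.100", 5025)
```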
  14. QUOTE (ejensen @ Feb 4 2009, 10:11 AM) I think you explained pretty well what happened - the app builder tries to write to some path (llb?) that already exists. The app builder fails, then cleans up after itself (I don't think it wants to leave any half-written files laying around). The next time, that path doesn't already exist so the app builder succeeds. I don't know why it won't overwrite or any of those details, but it sure sounds like the builder would succeed the first time if you clean out the destination folder first. Mark
  15. QUOTE (nitulandia @ Jan 29 2009, 08:25 AM) It's a long shot, but I found this instruction in the LabVIEW 8.6 help: "If you reference a .NET object from a VI that does belong to a LabVIEW project, the CLR considers the project to be the running executable. The CLR therefore searches for private assemblies in the project directory" So if all your DLLs aren't in the same directory (and at the same level - not in subdirectories) as your lvproj file, try putting them there. Seems I'm rapidly running out of ideas! Mark
  16. Thanks for the explanation - Paul obviously understood the problem better than I did. Good luck! Mark
  17. QUOTE (nitulandia @ Jan 28 2009, 04:55 PM) OK - so if you put the DLL in the requested path, it works the next time you call the VI - but will it fail again later looking for the same DLL, but now in a different path? That's what I think I read. And that is really pretty strange - it sounds like the .NET assembly (DLL) may be buggy. Or does it just take a while to fail again? Since you said the problem is random, maybe it just works for a while because it doesn't try to execute that method? That is possible, because a method invoked by reflection won't cause any problem until it is actually called at runtime - it's not like a DLL that loads when the calling application loads, so if that particular method doesn't get called, the app could run for days. Anyway, there's nothing your LabVIEW code is doing to cause this - it's all happening in the .NET assembly, so it may be up to the vendor to fix it. But here's a link to how .NET resolves paths to private assemblies: http://www.ondotnet.com/pub/a/dotnet/2003/...dingpolicy.html The easiest way to deploy a .NET DLL is to include it in the same directory as the exe, since that's always the first place .NET looks for it. So, if they aren't already there, try putting all the DLLs in the same directory. That should force the calling DLL to at least look for any dependencies there. Mark
  18. I guess I'm not quite sure what the product you want to deploy actually is. If you need communication between two LabVIEW-based programs running in separate execution threads, then what Paul has suggested makes a lot of sense. I had interpreted the need as more of an application with an API for LabVIEW developers. To do this and still protect your IP, you could deploy your project with a set of VIs that have either password protection or no block diagrams. Then the LabVIEW developers who want to use your application would just add those VIs to their block diagrams, and that would be that. To run your app stand-alone, you would just need to include an exe that called whatever - it could call into your library of VIs or it could just be self-contained. But it seems a little convoluted to deploy an application written in LabVIEW for LabVIEW users and then not provide the ability to load VIs that link to that app into a block diagram. But it's altogether possible I don't understand the problem. Mark
  19. QUOTE (nitulandia @ Jan 28 2009, 01:59 PM) Thanks for the info - this probably isn't what I first suspected, so I'll ask more questions. You describe the error as a LabVIEW error and as an exception - is it indeed an exception thrown by the .NET assembly? Or is it an error generated by the LabVIEW call? Can you attach the error message and error code? Thanks, Mark
  20. QUOTE (nitulandia @ Jan 28 2009, 10:31 AM) A couple of questions - do these errors happen while the VIs are executing? Or do they happen after a VI that calls a .NET DLL is closed and then re-opened? I'm guessing the latter, but more info would help me understand. Thanks, Mark
  21. A couple of ideas: 1) Build the core program as an actual DLL (using the option in the LabVIEW Build Specifications in the Project Explorer to create a Shared Library). This exposes whatever API you have into the core program as conventional C/C++ functions. It's a good way to create a core app that can be called from different languages/development environments. Calling it from LabVIEW then requires using a CLN. 2) If you're only going to run the core app standalone or controlled from LabVIEW, then you could deploy the app as a library (folder or llb file) and define some VIs as the API. You could then have a VI that serves as a caller for the core app, built as an exe, that would run standalone, or you could call into your library of VIs (your API) if you want to build a specific UI. Mark
  22. QUOTE (Mourad @ Jan 22 2009, 08:36 AM) These errors sound like communication errors caused by misconfigured serial port settings - if the start and stop bits aren't correct, the data frame will be wrong. If you're not using the latest driver from NI (the plug-and-play one), I would get it, because it seems to have the most complete and automated handling of all the possible serial port settings for the host. For instance, it tries to handle whether or not flow control is supported automatically. Also, your cable may still be incorrect - have you tested it pin-to-pin to make sure it matches the diagrams in the user manual? Mark
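As a sanity check on framing, the arithmetic is simple: every byte on the wire carries start/stop (and maybe parity) overhead, which is why a settings mismatch between host and instrument corrupts every frame. A small Python helper:

```python
def bytes_per_second(baud, data_bits=8, parity_bits=0, stop_bits=1):
    # One start bit always precedes the data bits; 8-N-1 = 10 bits/byte.
    bits_per_frame = 1 + data_bits + parity_bits + stop_bits
    return baud / bits_per_frame

print(bytes_per_second(9600))  # 8-N-1 at 9600 baud -> 960.0 bytes/s
print(bytes_per_second(9600, parity_bits=1, stop_bits=2))  # 8-E-2 -> 800.0
```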
  23. QUOTE (Mourad @ Jan 22 2009, 02:19 AM) That error is a VISA timeout, so it means your instrument is not responding (and probably not getting your command string, either). If MAX indicates that your COM port (ASRL1::INSTR as the resource name) is working properly (and that is the one you are using), and you have read the HP 34401A manual and are sure it is configured properly for serial communication, then I would next suspect the cable. From the Agilent 34401A manual: "Connection to a Computer or Terminal. To connect the multimeter to a computer or terminal, you must have the proper interface cable. Most computers and terminals are DTE (Data Terminal Equipment) devices. Since the multimeter is also a DTE device, you must use a DTE-to-DTE interface cable. These cables are also called null-modem, modem-eliminator, or crossover cables. The interface cable must also have the proper connector on each end." Mark
  24. QUOTE (Mark Yedinak @ Jan 21 2009, 09:21 AM) OK - so this is the ONC RPC as specified in IETF RFC 1831 (that's a pretty ancient standard in computing years)? I guess I'm still a little confused because to the best of my knowledge there is no "basic" RPC. RPC (Remote Procedure Call) has been implemented a thousand different ways - SOAP, CORBA, Windows Remoting, etc, and in the grand tradition of computing none of them are compatible with any of the others! So, I don't really have anything else to offer except encouragement! Good Luck! Mark
  25. QUOTE (Mark Yedinak @ Jan 21 2009, 08:30 AM) Mark, I'm making the assumption that you're talking about the XML-RPC server project in the CR? If so, that does include an example of creating a LabVIEW call into an XML-RPC server. You are correct that this project mostly supports the server side, but it includes all of the stuff you need to pack XML-RPC into a request for any XML-RPC server and parse the response (see the Call Generate Sine Wave.vi for an example). But you first just called it RPC, so I'm not clear exactly which RPC implementation and protocol you mean - there are lots of them, and the XML-RPC project only supports one! Mark
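For anyone who wants to see the client-side packing and parsing without LabVIEW, Python's standard xmlrpc.client library does the same job; the method name and parameters below are made up for illustration:

```python
import xmlrpc.client

# Serialize a call the way an XML-RPC client puts it on the wire.
request = xmlrpc.client.dumps((128, 1.0), methodname="GenerateSineWave")

# Parse a (faked) server response back into native Python values.
response = xmlrpc.client.dumps(([0.0, 0.5, 1.0],), methodresponse=True)
params, method = xmlrpc.client.loads(response)
print(params[0])  # -> [0.0, 0.5, 1.0]
```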