Everything posted by Mark Smith

  1. QUOTE (Aristos Queue @ Apr 13 2009, 11:24 PM) You might want to have a look at the XML-RPC protocol for serialization of data to export from LabVIEW and import into Java. Compared to any other XML protocol (like SOAP) I've explored, it's very simple yet very flexible. The XML-RPC Server project in the CR has tools to pack LabVIEW data types (the ones XML-RPC supports) into XML-RPC compliant XML and then into an XML-RPC method call and Java has several implementations to choose from (http://www.xmlrpc.com/directory/1568/implementations) to unpack the data and execute the requested method. Here, your method call just sends data to the Java client (the Java client could just read the serialized call from disk if you choose to write it there) and the client side method unpacks the data fields and builds a new object from that. The advantage here is that you can use existing tools to serialize the data for transport from LabVIEW to Java and you won't have to roll your own - instead, you can concentrate on how to get the object data to build the XML-RPC call from LabVIEW and the Java code to re-build that object as the method. Mark
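For illustration, the round-trip Mark describes can be sketched with Python's stdlib xmlrpc.client standing in for both ends (the method name "updateObject" and the data fields here are made up; any XML-RPC implementation, including the Java ones linked above, can decode the same payload):

```python
import xmlrpc.client

# Pack a hypothetical record into an XML-RPC methodCall. The payload is
# plain XML, so it can be sent over HTTP or written to disk for the client.
payload = xmlrpc.client.dumps(({"serial": 42, "name": "unit-A"},), "updateObject")

# The receiving side unpacks the method name and the parameters,
# then rebuilds its native object from the fields.
params, method = xmlrpc.client.loads(payload)
print(method)       # updateObject
print(params[0])    # {'serial': 42, 'name': 'unit-A'}
```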
  2. QUOTE (Ic3Knight @ Apr 12 2009, 07:49 AM) I thought I should definitely take a look at your sample code since when I posted my sample exam not long ago I got some very valuable help from members of this forum (and I subsequently managed to pass my CLD!). So here's what I see:
1) As Mark Y said, the CLD guide says don't use property nodes to update controls - I have a million arguments why this makes no difference in the situation where you used it - the performance penalty is completely irrelevant when you just use them to initialize controls, but section 4.a.4 of the CLD guide says don't, and the NI guys get to grade the exam, not me.
2) Once again, as Mark Y said, don't use default tunnel values (Section 4.c.7) - wire the values explicitly.
3) In the program logic, nothing initializes the "Simulation_Switches_FGlob.vi", so if you stop the VI and then restart it w/o it leaving memory these values don't get updated and the FP control says the Car is Out of Position but the carwash runs anyway - but see #1 above - if you had used a Value (Signaling) property node here to update the Simulation Switches, it would have forced execution of the Simulation Switches Value Changed event and updated the functional global to match the FP control. I would consider this a valid use of a property node, but I don't know how the exam graders would see it. The safe way would be to call the "Simulation_Switches_FGlob.vi" in line at startup like you call the TIMER_FGlob.vi.
4) Error Control - In the Cycle Control.vi, for example, the VI runs the same on error in "True" or "False" (and updates the Functional Globals, since they run the same on error in true or false). I think it is considered better to wrap a VI like this in an overall error case and not evaluate or update the VIs inside on error - that would conform to the LabVIEW style guide.
5) Lastly, when I opened the project, I had to re-link the VIs all manually - this shouldn't be necessary on a project like this except that it was looking in \user.lib in my LabVIEW source folder - if you saved the project to user.lib in your LabVIEW source folder, I would suggest not doing this. If not, then I don't know why I had the linking problem.
But, after all these criticisms, I thought the code was well written, documented well enough, and easy to follow. It uses a queued event driven UI handler to feed a queued state machine, so all of this is good. The timer function is logical and de-coupled (nothing ever blocks and the UI is always responsive). Lastly, I'll comment on what BenD said - I did find the exam I took to be rather lengthy. The logic wasn't really any more complicated than the CW example, but there was a lot of it! So, the take away is that you need to be able to do the state machine/queued state machine/event driven UI handler in your sleep so you can concentrate on the specifics of the implementation at hand. But if you did this CW example in the allotted 4 hours, I think you're in pretty good shape. Good Luck, Mark
  3. Sounds like you're already on top of it! And I agree, maybe the ini setting is the right way to go since while it may be a PITA to change inputs that you might sometimes want to default, most of the time it's safer to just make them all required and then handle the exceptions to that rule. Thanks, Mark
  4. Maybe this has been discussed elsewhere but I can't find it. When I use the right click menu to make a data member access VI for my LVOOP class, if it's a write VI, the connection for the data member is recommended but not required. Seems to me there's no other reason for a data access write VI except to update that class member, so there's no reason to ever call this VI and leave the data input unwired (like I mistakenly did and then spent half an hour finding my mistake). Is this correct? Am I missing a setting (I know about the ini file option to make all new inputs required, but that seems like the nuclear option) for VI creation in a class? And if not, then should this behavior be changed to make inputs on new data member access VIs required? Mark
  5. QUOTE (Mark Yedinak @ Apr 7 2009, 12:46 PM) Here's my two cents - at NI developer days last month, our local rep (who's a sharp guy) was giving the "Prepare for the CLD" talk. He said never use property nodes to update controls on the CLD - use locals - while the viewgraph behind him said "Use Property Nodes to initialize Controls" - so it would appear that NI can't agree among themselves. He then showed his example where locals update a front panel control much faster than using property nodes. At this point, I commented that the performance difference is beside the point, because if you're using either to update a control in a fast loop, you're doing it wrong. You should only need to update a control to initialize it or show some other change of state that a user would need or want to know before using that control for input. If you need to update the display in a fast loop (faster than the user can possibly grab and use a control) it should be an indicator anyway. Mark
  6. QUOTE (TobyD @ Apr 7 2009, 11:24 AM) It's numeric roundoff due to floating point imprecision. As type double, 0.08 entered becomes 0.0800000000000000017; increment up, you get 0.160000000000000003. Increment up again and you get 0.239999999999999991 - now, when you increment down you get 0.159999999999999976 and you can't step down another 0.08 and still be above the allowed minimum. Mark
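The same roundoff is easy to reproduce in any IEEE-754 double arithmetic; here is a Python sketch of the increment/decrement sequence from the post:

```python
# Each step adds or subtracts the double closest to 0.08, whose exact
# value is 0.08000000000000000166...; the rounding error accumulates.
x = 0.08
x += 0.08      # stored value is exactly the double nearest 0.16
x += 0.08      # stored value is 0.2399999999999999911...
x -= 0.08      # now 0.1599999999999999755..., just BELOW 0.16

print(x < 0.16)         # True: stepping back down undershoots
print(x - 0.08 < 0.08)  # True: one more step down lands below the 0.08 minimum
```

This is why a control with a minimum of 0.08 and an increment of 0.08 can refuse the last step down: the accumulated value sits a hair below the limit.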
  7. QUOTE (Michael Malak @ Apr 7 2009, 10:22 AM) This is the one place where using local variables (or property nodes) is considered good LabVIEW style - to write values to a control, right click the control on the BD and select Create->Local Variable - now, whatever you wire to the local will get written to the control Mark [Edit] Just one more thing - you can save a lot of time doing this sort of thing if you group controls into a cluster and then use some tool that automates persisting that cluster to file. Then you can do one simple write to save the initial values and one read to repopulate the cluster - or, if you need to separate the controls on the UI, you can then unbundle the cluster on read. There are several approaches that essentially automate the write and read from file. I've attached an example that uses the LabVIEW Schema XML Tools, but if you would prefer to save the config data in ini style files, a couple of good options are the OpenG Config File Tools http://wiki.openg.org/Main_Page and David Moore's Read/Write Anything VIs http://www.mooregoodideas.com/File/index.html Mark Download File:post-1322-1239123211.zip
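By way of illustration, the one-write/one-read pattern for a grouped "cluster" of settings looks like this with Python's configparser standing in for the LabVIEW tools mentioned above (the setting names here are made up):

```python
import configparser
import io

# Group the "cluster" of initial values into one section:
# write it once at shutdown, read it once at startup.
settings = {"gain": "1.5", "offset": "0.1", "channel": "3"}

config = configparser.ConfigParser()
config["Startup"] = settings

buf = io.StringIO()          # stands in for an .ini file on disk
config.write(buf)

buf.seek(0)
restored = configparser.ConfigParser()
restored.read_file(buf)
print(dict(restored["Startup"]))   # the same values come back in one read
```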
  8. QUOTE (NeilA @ Apr 3 2009, 12:11 PM) The internecine avoider (a term I had to look up as well) is from the \vi.lib\Utility\tcp.llb and is part of LabVIEW - as far as I can tell it just prevents two services from listening on the same port since then you wouldn't have any control over which service actually got the connection. There's some other interesting stuff in that llb, as well. And if you get something useful, please post it! I'm always learning from the code available here on LAVA. Mark
  9. QUOTE (NeilA @ Apr 3 2009, 03:33 AM) It seems you want a single-purpose server (just to handle SOAP requests) so in many ways this would look a lot like the XML-RPC server I built except the protocol would be a bit different. And it seems you have a way to unwrap the incoming requests into LabVIEW calls and then wrap the responses into SOAP as well - so I would think you're better than halfway there - maybe three-quarters. QUOTE (NeilA @ Apr 3 2009, 03:33 AM) The problem I had was with the incoming TCP connection, but after looking at your server I see that you use a producer consumer pattern to dequeue TCP references. Is it possible to explain a bit further what you are doing in the server vi? Is there any particular reason that you use the set server vi (I have not seen this before)?
Initialize: The server VI creates a queue for connections and opens a listener with the Wait on Listener - whenever a new connection is accepted, it gets dequeued in the "Dequeue" case. The initialize also creates queues for the message request and the message response, establishes a path for the Fault Log, and hides the front panel. Queuing up the connection requests means that the server can service those requests even if they come faster than the methods requested can respond. Transitions to Dequeue or Error.
Dequeue: In this case, a connection gets dequeued and the TCP Read All reads the data from that connection. Inside the TCP Read All the logic is to read enough of the HTTP header to make sure you get the "Content-Length" field and use that value to make sure you've retrieved the entire message (or timed out or gotten an indication the client has closed the connection) before you stop reading. Now you should have a complete XML-RPC method invocation message in HTTP - in your case, you should have a complete SOAP invocation. Now, the message gets passed to the XML-RPC Request Parser that checks that it is a valid Method Call and, if it is, parses out the method name (in this implementation, the method name is the VI that will get called) and the parameters (if any) that get passed to the VI. Here's where you would parse your SOAP invocation into the VI that will service the request and the parameters to pass. Transitions to Remote Procedure Call (if the message is a Method Request) or Server State or Exit (if the message is from the Set Server VI) or Error.
Remote Procedure Call: In this case, as documented on the diagram, we use the Method Name to identify the VI to call, enqueue the parameters onto the methodCall queue, and launch (asynchronously) the method VI. If we get an error calling the method, flush the methodCall queue and push a fault on the methodResponse queue - if no error was encountered, the called VI will push a response on the methodResponse queue. The MethodResponse will have a connection ID that identifies which connection made the request and expects the response. When the MethodResponse gets put on the queue it's already a fully formatted XML-RPC Response. The called VI is responsible for wrapping the response in the XML-RPC Response format. Transitions to Dequeue (to wait for the next request) or Error.
The Server State case just responds to commands from the Set Server VI, the Error case converts errors to XML-RPC faults (reported to the caller) and logs faults to file, and the Exit case just cleans up on the way out. The Set Server State VI exists because the Server runs "headless" - no front panel. If you want to open the server front panel or stop it or change the debug level, you need this UI component - or you could pass it the appropriate XML-RPC method call from any client. But you can start the server directly and never use the Set Server State if desired. Hope this helps! Mark
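The Dequeue and parse steps above can be sketched in Python (the method name "GetStatus" is hypothetical, and the stdlib xmlrpc.client stands in for the XML-RPC Request Parser; a real server would read the bytes from a socket rather than a string):

```python
import xmlrpc.client

# A request as it might arrive over TCP: HTTP headers, a blank line,
# then the XML-RPC methodCall body.
body = xmlrpc.client.dumps((7,), "GetStatus")
request = (
    "POST /RPC2 HTTP/1.0\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n" + body
)

# Read the header far enough to find Content-Length, then take exactly
# that many bytes of body - the "TCP Read All" logic described above.
header, _, rest = request.partition("\r\n\r\n")
length = int(next(line.split(":")[1]
                  for line in header.split("\r\n")
                  if line.lower().startswith("content-length")))
message = rest[:length]

# Parse out the method name (the VI to call) and its parameters.
params, method = xmlrpc.client.loads(message)
print(method, params)   # GetStatus (7,)
```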
  10. QUOTE (NeilA @ Apr 2 2009, 12:00 PM) Neil, As the developer of the XML-RPC LabVIEW server, I thought I'd throw in my two cents worth. First, I am not any kind of expert on web services, either SOAP or the RESTful protocol the LV 8.6 Web Server uses. I developed the XML-RPC server to support a project where the intention was to deploy LVRT targets (cRIO chassis) and then be able to control and monitor the distributed systems from any platform (the original spec was a Java app running under Linux as the system manager). I kept the implementation pure G with as few bells and whistles as I could so I could be confident I could deploy the project as an executable under LV as old as 7.1 and to any target that supports LabVIEW.
The XML-RPC protocol was chosen because most languages and platforms have XML-RPC toolkits (Java, .NET, python, perl, etc) and because the protocol is very simple and lightweight - the whole spec can be printed on seven pages (at 10 point font)! This made it practical to roll my own server including the protocol parsing and packaging. I'm not so sure the same can be said of SOAP. I briefly looked at the spec and it seems like it might be an order of magnitude more difficult to create the necessary parsers and message packaging tools. I also poked around a little and I had a hard time finding any useful information about SOAP parsers/packagers - most of the SOAP protocol stuff is wrapped in WSDL and then that is wrapped in a server-specific implementation (ASP, for example). If you can find a simple SOAP parser, that could simplify using the XML-RPC server as a framework for a SOAP server. But the SOAP protocol is still just XML, so if one can do XML-RPC then one can do SOAP - it just will take longer.
But it would just be a SOAP server - it seems that generally SOAP calls are made through a web server. The web server must recognize the SOAP protocol and dispatch it correctly but also handle anything else that a web client might throw at it.
My XML-RPC server just returns an error if it sees ANYTHING other than an XML-RPC call. But the up side is that the server architecture is straightforward and easy to follow, which can't be said of most other implementations. Mark
  11. QUOTE (Paul_at_Lowell @ Mar 24 2009, 11:03 AM) I don't think you can use Read from Spreadsheet File on an Excel file - you'll have to save the Excel file as a csv or some other text format first. If you want to read directly from an XLS file, you'll have to use the activeX interface. http://zone.ni.com/devzone/cda/epd/p/id/3409 Mark Edit: added link
  12. QUOTE (Cat @ Mar 23 2009, 11:33 AM) Probably doesn't make you feel any better, but I was at the local NI Developer Day last week and they demoed the execution trace toolkit. The guy running the demo (who's a pretty sharp LabVIEW guy from the NI home office) managed to crash it and had a hell of a time getting it to do what it was supposed to. He admitted he wasn't very familiar with the tool, but I did get the distinct impression it may not be ready for prime time. When it did work, you could see how useful it can be. NI generally makes good stuff, but they may have jumped the gun on release of this product. Sorry I can't really help. Mark
  13. QUOTE (Cat @ Mar 20 2009, 09:06 AM) Closing the front panel won't stop a VI that's been loaded dynamically - you can close and then re-open the panel and the VI will still be running. Closing all refs will. As long as the VI is running some sort of blocking process (loop, wait to dequeue, something) and there's a ref around, it will continue to run. Sounds like your VI panel gets closed but it never actually stops except when all refs go dead, so you may want to look at how you command the VI to actually stop. Mark
  14. QUOTE (LVBeginner @ Mar 19 2009, 06:57 AM) I think typically the outgoing port will get assigned by the OS sockets layer for most implementations - it just selects an unused port. Run "netstat" from the Command Prompt and you can see what local ports are used for a connection. I don't know anything specific about WinHTTP that might be different. Mark
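The point about the OS assigning the outgoing (ephemeral) port can be demonstrated with a quick Python sketch: only the remote address is specified, and the local port comes back nonzero after connecting.

```python
import socket

# A listener on localhost to connect to (port 0 = let the OS choose).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

# For the outgoing side we never name a local port; the OS sockets
# layer picks an unused one, just as netstat would show.
client = socket.socket()
client.connect(server.getsockname())

local_port = client.getsockname()[1]
print(local_port)   # OS-assigned ephemeral port

client.close()
server.close()
```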
  15. QUOTE (bsvingen @ Mar 18 2009, 05:25 PM) I agree here - if they want to shut down the test in a more or less orderly manner, that's one thing, but a true abort suggests "things have gone to hell and we need to stop this thing ASAP" - like the abort button when the Enterprise has started its self-destruct sequence. They should be handled quite differently. In most of the systems I've worked on, we never included a software or computer controlled "abort" although we did include stop and interrupt functions/buttons. If the system needed a real abort, that's what the big red mushroom switch that disconnected the mains or whatever was for. Mark
  16. QUOTE (Cat @ Mar 19 2009, 05:27 AM) I don't think it makes much difference which language you use to learn OOP, but I would think C++ would be tougher than using a more strict OO language, mostly because you can write code in C++ all day and never have to use a OO paradigm. If you learn Java or C#, those languages force you into learning something about OO because there's just not any other practical way to do things. Mostly, it's about understanding the OO principles independent of the language you choose. I won't claim it's easy - it took me a long time to get my head around OOP and I learned it programming in C# before I ever started writing LVOOP. And I still have problems getting started on OOP projects - but in a sense that's a good thing because it makes one think carefully about the architecture before coding. If you don't design your LVOOP to take advantage of the inheritance hierarchy, then you miss one of the most powerful aspects of LVOOP and why I find OO programming in LabVIEW to now be worth the extra time and overhead. But back to the question - I don't think it's C++ you need to know - it's OOP principles and the language used to describe them. Mark Edit: added links http://zone.ni.com/devzone/cda/ph/p/id/7 http://java.sun.com/docs/books/tutorial/java/concepts/
  17. QUOTE (vugie @ Mar 19 2009, 05:38 AM) I think the chart referenced in the link above explains why more people don't use 7-zip - you get about a 24% reduction in file size compared to zip but you pay a performance penalty in time of over 300%. Also, I would expect zip is more widely supported - does the native Windows zip know what to do with a 7-zip file? I don't know. But back to the original question - we need to know more about your data to offer any useful suggestions. For instance, if you're storing waveform info scanned from a 16-bit card then you can get full fidelity by storing a waveform header with scaling info and store each data point in two bytes. If you just dumped that data as doubles, you'll use four times as much space. Mark
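The two-bytes-versus-doubles point is easy to see with Python's struct module (the ADC counts below are made-up values for a 16-bit card):

```python
import struct

# Hypothetical 16-bit ADC readings: store the raw counts plus a small
# scaling header, rather than converting every point to an 8-byte double.
counts = [0, 1024, 32767, -32768]

raw = struct.pack(f"<{len(counts)}h", *counts)              # 2 bytes/point
as_doubles = struct.pack(f"<{len(counts)}d", *map(float, counts))  # 8 bytes/point

print(len(raw), len(as_doubles))   # 8 vs 32: doubles take 4x the space
```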
  18. QUOTE (rolfk @ Mar 18 2009, 06:03 AM) Thanks - I thought I was missing something obvious. BTW, I certainly appreciate all of your very knowledgeable advice on all the threads I follow! Mark
  19. QUOTE (rolfk @ Mar 18 2009, 02:20 AM) Rolf, Thanks for the information. Can you give a simple example of how one could create an address string that would get to a site like "www.google.com" through a proxy server called "myproxy.mysite.net"? Can you pass this composite address to the LabVIEW TCP Open Connection and expect it to work? Thanks, Mark
  20. QUOTE (jaehov @ Mar 17 2009, 12:56 PM) See this article to start http://en.wikipedia.org/wiki/Method_(compu...#Static_methods This also applies to static properties In LabVIEW, when you browse the pulldown list of methods or properties from the .NET Property or Invoke node, the static members are preceded by . You can use the static methods or properties without first instantiating an object. Mark
  21. QUOTE (jaehov @ Mar 17 2009, 12:24 PM) Nope, that's what you have to do. Unless you're trying to access static methods or properties, you have to instantiate the object with the constructor node before any object exists that has any methods or properties. Mark
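The static-versus-instance distinction in the two posts above can be sketched in Python terms rather than .NET (the Counter class here is made up for illustration):

```python
class Counter:
    """Hypothetical class illustrating static vs. instance members."""

    @staticmethod
    def describe():
        # Static method: callable with no instance, like a .NET static
        # method invoked without a constructor node.
        return "Counter: a simple wrapper around an integer"

    def __init__(self, start):
        # The "constructor node" step: only now does an object exist.
        self.value = start

    def increment(self):
        # Instance method: requires the constructor to have run first.
        self.value += 1
        return self.value

print(Counter.describe())   # works without any Counter instance existing
c = Counter(10)             # instantiate first...
print(c.increment())        # ...then instance methods become available: 11
```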
  22. John, Here's an interesting article from the home team http://www.sandia.gov/LabNews/081219.html I think presently the best way to build a massively parallel test system is not by using a multi-core machine (at least not over four cores) but by using multiple machines that coordinate over some common interface (gigabit ethernet and TCP/IP?). This allows the architect access to more memory and bandwidth per thread (and typically per hardware device) than you can get from just adding more cores to the processor. Of course, it costs more. Mark
  23. QUOTE (neBulus @ Mar 6 2009, 12:12 PM) Ben, If you get Process Explorer from http://technet.microsoft.com/en-us/sysinte...7c5a693683.aspx you can get more detailed info about just what is using the memory than from the task manager - it will show the actual memory consumed by a specific process and may lead to an answer quicker. Mark
  24. QUOTE (Mark Yedinak @ Mar 5 2009, 09:24 AM) I have done something similar but rather than use a variant I used a string (byte array) for the data and use the "flatten to string" to convert the data and also prepend the message type as a flattened string - this way, the message is platform/program agnostic. All the recipient needs to know is how LabVIEW represents the data. Could a variant work the same way? I don't know much about how LabVIEW packages variants. QUOTE (jdunham @ Mar 5 2009, 10:49 AM) At that point you could just keep the versioning for each specific file. But then you'd have a poor-man's implementation of Subversion [smell the irony?], so you should just convince your IT group to put the real Subversion on the server. In that case you'd just run a simple svn update before executing any test. When you do something like this w/Subversion, do you use the command line and build a batch file to execute or do you use an API? Thanks, Mark
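The flatten-and-prepend scheme described above could look like this in Python (the 4-byte big-endian length prefix is an assumption modeled on LabVIEW's default flattened-string layout; the message type and payload are made up):

```python
import struct

def flatten_string(s: str) -> bytes:
    # Length-prefixed "flattened" string: 4-byte big-endian length,
    # then the bytes (an assumption about the wire layout).
    data = s.encode()
    return struct.pack(">i", len(data)) + data

def unflatten_string(buf: bytes) -> tuple:
    # Returns (string, remaining bytes) so messages can be read in order.
    (n,) = struct.unpack(">i", buf[:4])
    return buf[4:4 + n].decode(), buf[4 + n:]

# Prepend the message type, then the payload, as two flattened strings;
# the recipient only needs to know the flattening convention.
message = flatten_string("SET_VOLTAGE") + flatten_string("3.3")

msg_type, rest = unflatten_string(message)
payload, _ = unflatten_string(rest)
print(msg_type, payload)   # SET_VOLTAGE 3.3
```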
  25. OK, so I was wrong the first time, but I think this may be why I've never had any luck with getting the host names to resolve correctly and it could be the problem here http://forums.ni.com/ni/board/message?boar...essage.id=90988 I can connect to my proxy server using its host name but I can't connect to anything else because there's evidently no automatic (easy) way to use a proxy server with the LabVIEW TCP/IP tools. I learn something every day (at least I hope so). Mark Edit: Try looking at the configuration of your web browser (assuming it can reach the requested host) and see if it's using a proxy server