Posts posted by Mark Smith

  1. When did you take the exam? Was it after they changed the format? I heard that it used to be some essay questions and a couple hours coding but now it is all coding. From your comments about requirements tracking it sounds like you took the newer one. If so I would be very interested in your opinion of how closely the single CLA sample exam compared with the real exam. Does it more closely resemble the actual exam than the CLD practice exams vs the real ones?

    I took the exam early this year - I found that the actual exam was much like the sample exam. It took me about eight hours to really complete the practice exam correctly and I could have used eight hours to do the real exam if I had that option. As it was, I got just enough done to pass.

    Mark

  2. Thanks for the detailed review! I have read many of your posts and you have earned much respect.

    It definitely could have been better, but with the four-hour time crunch and all, it's just not enough time. It took four and a half hours to do; I should have saved a copy away at the four-hour mark and posted that, but it would have gotten about one point for functionality.

    I used LVOOP for a couple of reasons. One was just for the sake of it, because I want to get as much practice as I can. Another is that it seems easier for me. You do end up with a lot more VIs to document and make icons for, but documenting them is easier since they don't do much. I don't know if I will write code in a similar way for the CLD - it depends on what they throw at me.

    I built an LVOOP solution for my CLA exam - all I can say is that it does add a large overhead in terms of time and documentation. For the CLA, where you don't have to produce functional code, it worked out for me - but just barely! I got high marks for style/documentation and a good score for architecture development (I presume that's because I had a pretty good OO system architecture) but a low mark for requirements coverage (I just ran out of time to get to anywhere near all of them). If this had been my CLD I'm not sure I would have had enough functionality to pass. But then again, I'm so used to doing things in LVOOP now I might struggle to create any project that's not LVOOP.

    Mark

  3. I see a new future for an "Obfuscated LabVIEW" contest! This feature (of course with misleading documentation) could send a code maintainer on a round-robin chase that could last for weeks if not months :rolleyes:

    I think I understand the discussion - here's what I think I'm reading:

    1) The event is registered (associated with a structure)

    2) It is never handled explicitly in that structure

    3) The event fires (somewhere)

    4) The event case with no explicit handling implicitly handles the event (which resets the timeout timer)

    5) As long as that event keeps getting fired faster than the timeout value, the timeout never executes even though it appears the event structure is sleeping and not handling any events

    If this is correct, I have to come down on the side of allowing the user to require the event structure to explicitly handle each event (I think that's what crelf proposed, if I'm following all this) just like removing the default value from case structures requires the user to explicitly implement all cases. I have to have a good, well considered reason to include a default case and I would want a good, well considered reason to have an event case that doesn't explicitly handle an event that I register for.
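
    If it helps to see the pattern outside of LabVIEW, here's a rough Python analogy (my own sketch, not LabVIEW code) - a loop that waits on a queue with a timeout, where any event it receives resets the timeout clock even though the "handler" does nothing with it:

    import queue, threading, time

    events = queue.Queue()

    def event_loop():
        while True:
            try:
                evt = events.get(timeout=2.0)   # the timeout case would run after 2 s of silence
                pass                            # "implicit" handling: consume the event, do nothing
            except queue.Empty:
                print("timeout case finally ran")
                break

    threading.Thread(target=event_loop, daemon=True).start()
    for _ in range(10):
        events.put("registered event")          # fired faster than the timeout...
        time.sleep(0.5)                         # ...so the timeout case never executes
    time.sleep(3)                               # stop firing and the timeout finally shows up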

    Mark

  4. OK, so it sounds like the only way to be sure of compatibility with future LabVIEW versions is to not flatten data to string. NI will just change it at some point and it will affect all past projects. I guess a human-readable, proprietary transfer mechanism will be the preferred system for us.

    I'm not sure this is the right take-away message. How LabVIEW flattens data to string (serializes it) is up to NI to decide, but they've done a good job providing documentation and backward compatibility. And if you're going to use TCP/IP, you have to serialize (flatten) the data at some point since the TCP/IP payload has to be a flattened string (byte array) anyway. I've got code going back to LabVIEW 7.1 that uses the flatten-to-string functions; it hasn't broken yet (through LabVIEW 2010) and I don't expect it to in any major way. The flattened string (serialized data) is used by way too many people in way too many applications (like yours, possibly!) for NI to risk arbitrary, non-backward-compatible changes.
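
    For what it's worth, the same idea exists in any language - here's a minimal Python sketch (my analogy, not NI's actual flattened-data format) of serializing a record to a byte string before it rides in a TCP packet and unflattening it on the far side:

    import struct

    def flatten(channel_id, timestamp, value):
        # fixed, documented layout: u16 channel, f64 timestamp, f64 value (big-endian)
        return struct.pack(">Hdd", channel_id, timestamp, value)

    def unflatten(data):
        return struct.unpack(">Hdd", data)

    payload = flatten(3, 1234.5, 0.707)   # this byte string is what the TCP payload carries
    print(unflatten(payload))             # (3, 1234.5, 0.707)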

    Mark

  5. In the context of a large application that will be maintained/added on to for years, what is the best choice for data communication?

    For the purposes of this discussion, we are talking about using TCP to communicate between two LV applications. Each application will be updated separately. The server side will ALWAYS be up to date with the latest transmission/receiving types, the clients may be out of date at any given time, but newer transmission types will not be used on legacy clients.

    Would it be better to:

    Use a messaging system relying on a typedef'ed enum with variant data (Flattened String to Variant / Variant to Flattened String used for message formation/conversion). Each message has an associated typedef for variant conversion.

    OR

    Use a messaging system relying on a typedef'ed enum with a human-readable string following the binary data of the enum. Each message has its own typedef and a string-formation VI, as well as a custom parser from string back to data.

    Additionally, LVCLASSES cannot be used, so don't go there.

    Would love to hear some takes including your perceived benefits/drawbacks to each system.

    I always expect I'm missing something, but why do you need the variant? If you've defined a header that includes message length and message type, that works to provide enough info to unflatten the message at the receiver if you just flatten whatever. And don't variants limit you to non-RT targets? If you use flattened data I don't think you have that restriction.
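
    To make the "just flatten whatever" idea concrete, here's a minimal Python sketch (field sizes and message types are my own assumptions, not a standard) of a length + type header in front of the flattened payload:

    import struct

    MSG_STOP, MSG_DATA = 0, 1                         # hypothetical message-type enum

    def pack_message(msg_type, payload):
        # header = i32 payload length + u16 message type, then the flattened data
        return struct.pack(">iH", len(payload), msg_type) + payload

    def recv_exact(conn, n):
        buf = b""
        while len(buf) < n:                           # TCP reads can return partial data
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("connection closed")
            buf += chunk
        return buf

    def read_message(conn):
        length, msg_type = struct.unpack(">iH", recv_exact(conn, 6))
        return msg_type, recv_exact(conn, length)     # unflatten the payload per msg_type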

    Second, if you decide to use a human-readable/cross-platform protocol, check out the XML-RPC server in the Code Repository - that's a published standard (an old one, but it still gets used) that defines message packing/unpacking in XML-formatted text and specifies a protocol for procedure invocation and response. It's pretty lightweight but still applicable for many tasks, and clients can be language/platform independent. But any of these human-readable schemes are less efficient than byte streams. For instance, to move an I32, you need something like

    <params>
      <param>
        <value><i4>41</i4></value>
      </param>
    </params>

    That's a lot of data for a 4-byte value! But it is easy to understand and debug, and if you need to move arbitrary chunks of data, the protocol supports base64 encoding of binary.
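
    If you want to see exactly what goes over the wire, the same protocol is in Python's standard library, so a quick sketch (just an illustration, not part of the LabVIEW XML-RPC server) shows the overhead and the base64 wrapping:

    import xmlrpc.client

    # marshal a single I32 the XML-RPC way - dozens of bytes for a 4-byte value
    print(xmlrpc.client.dumps((41,)))

    # arbitrary binary chunks ride along base64-encoded inside a <base64> element
    blob = xmlrpc.client.Binary(bytes(range(8)))
    print(xmlrpc.client.dumps((blob,)))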

    Mark

  6. What I take away from this thread is that we have a lot of really good architects and developers that participate on LAVA and understand the importance of having some sort of error handling strategy. It hardly matters how you do it as long as you do it and do it consistently. I use the error structure (mostly because the method templates on classes add them), but the important thing is that it makes me think about how I need to handle errors that might be generated. Should I catch and discard? Should I just make the VI a no-op and pass on the error cluster? Do I use a dialog (seldom do I find this to be a good idea, but every once in a while it's the right answer)? As long as developers are thinking about error handling, their code will be better.

    Mark

  7. I'm looking for suggestions on how to best handle UI events (stop, start logging button, possible menu options, etc.) within the following design:

    producer loop: serial comm with proprietary control system

    producer loop: serial comm with NI analog related hardware

    producer loop: serial comm with heise pressure transducer

    consumer loop: parses data/strings/responses from the producer loops, updates the UI (w/ parsed data, graph, DI/DO LEDs, etc.)

    Perhaps another loop (events) to monitor UI events?

    another suggestion?

    Thx for your time and effort,

    Anthony

    I think you're on the right track - all the producers (in their own loops) enqueue data, and the consumer dequeues data from all of them in another loop and processes it. If the processing/saving/etc. bogs down, you have some buffering and don't lose data. The consumer loop should then publish the processed data on user events. Now, have another loop with an event structure that both responds to UI events (button clicks, text entry, etc.) and also registers for the user events generated by the consumer loop. The UI will update as quickly as the consumer supplies data on the user events and will also be responsive to user input.
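
    As a rough Python analogy of that structure (queues standing in for LabVIEW queues and user events, names invented):

    import queue, threading, time, random

    raw_q = queue.Queue()   # producers -> consumer
    ui_q = queue.Queue()    # consumer -> UI loop (stands in for the user events)

    def producer(name):
        for i in range(5):
            raw_q.put((name, i))                  # e.g. a serial read result
            time.sleep(random.uniform(0.05, 0.2))

    def consumer():
        while True:
            src, value = raw_q.get()
            processed = f"{src}: {value * 2}"     # parse/scale/log here
            ui_q.put(("new data", processed))     # "publish" for the UI loop

    def ui_loop():
        while True:
            event, payload = ui_q.get()           # would also receive button clicks, etc.
            print("UI update:", event, payload)

    threading.Thread(target=consumer, daemon=True).start()
    threading.Thread(target=ui_loop, daemon=True).start()
    for name in ("control", "analog", "pressure"):
        threading.Thread(target=producer, args=(name,), daemon=True).start()
    time.sleep(2)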

    Mark

  8. When you serialize objects, the version number of the class gets written into a field (at least when using XML serialization - I presume the same is true of binary), so you can check which version of the class was loaded and take appropriate action rather than inspect the actual object data. I use the native LabVIEW XML serialization and it works quite well - of course, I'm not serializing objects that are 100-500 MB!
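
    The same pattern in any text-based serialization looks something like this Python sketch (my illustration, not LabVIEW's XML schema) - read the stored version first, then decide how to interpret the rest:

    import json

    CURRENT_VERSION = 2

    def load_settings(text):
        data = json.loads(text)
        version = data.get("version", 1)      # older files may predate the field
        if version < CURRENT_VERSION:
            data.setdefault("units", "V")     # upgrade/mutate legacy data here
            data["version"] = CURRENT_VERSION
        return data

    print(load_settings('{"version": 1, "gain": 2.5}'))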

    Mark

  9. Mark,

    Thank you for your reply. I think I wasn't quite clear here. My PC is the device, not the controller. Since my PC is the actual device, I want to concentrate on device functionality and not on learning any more communication protocols than I have to. I know I will have to implement a SCPI parser at a minimum. With that said, my questions really are:

    1) Do I need to implement anything other than SCPI over (TCP/IP) ethernet or USB to be a viable product in the marketplace?

    2) Could I just use a GPIB-to-Ethernet or GPIB-to-USB adapter to connect my device to an existing cluster of devices that are connected via GPIB?

    3) Another question: Are you aware of an available Java-language SCPI parser?

    I'm just now trying to get up to speed on instrumentation products/protocols, etc., so forgive me if my questions aren't precise.

    Scott

    You are correct - I missed the point of your question :)

    1) I have no idea - I'm not a marketing guru, although I do see many instruments these days that ship w/USB/ethernet and no RS232/GPIB

    2) I have no personal experience, but I think it will require you to do some specific programming to interface with the GPIB driver. Just as you'll have to implement some sort of listener/server for TCP/IP and some sort of serial port listener, you'll need to implement some sort of GPIB listener. You may be able to do all of this with VISA, but I've never tried. If you can do it with VISA, much of the work will already be done.

    3) I don't know of a Java SCPI parser - SCPI shouldn't be that hard to parse, and if your command set isn't too extensive you could roll your own. I expect that SCPI parsers are most often embedded in the instrument's firmware, so that may be why there aren't many for PC platforms.
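
    Just to show how small a roll-your-own parser can start out, here's a rough Python sketch (the commands are invented) that splits a SCPI string into its header path, query flag, and arguments:

    def parse_scpi(line):
        line = line.strip()
        is_query = line.split()[0].endswith("?")
        header, _, args = line.partition(" ")
        path = [node.rstrip("?").upper() for node in header.lstrip(":").split(":")]
        return path, is_query, args.split(",") if args else []

    # hypothetical commands for a home-built instrument
    print(parse_scpi("MEAS:VOLT:DC? 10,0.001"))   # (['MEAS', 'VOLT', 'DC'], True, ['10', '0.001'])
    print(parse_scpi(":SOUR:FREQ 1000"))          # (['SOUR', 'FREQ'], False, ['1000'])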

    Here are some links that may help:

    http://www.ivifoundation.org/specifications/default.aspx

    http://digital.ni.com/public.nsf/allkb/29AA51AAE9ED716786256DAA0035EEF8

    http://digital.ni.com/public.nsf/allkb/9CC0939663F1C5DE862565D70082E89E?OpenDocument

    http://digital.ni.com/public.nsf/websearch/D8B48FE4263E754C862566F800791B2E

    Mark

  10. Hi,

    I'm posting this question here because I don't know where else to post it.

    I want to implement a new device on my PC to take some measurements. Remote control will be via SCPI so it can play in connected environments. From my research it appears to me that I can just send SCPI text out over the USB comm port or Ethernet. If the device needed to play in a GPIB-connected environment, then all I would have to do is purchase either a USB-to-GPIB or Ethernet-to-GPIB connector. Is this a correct assumption on my part? I want to make sure there is not another layer of software that needs to be implemented.

    -Scott

    As long as you are using VISA as the abstraction layer, this should work. For an instrument that accepts SCPI commands and supports TCP/IP over ethernet, serial, USB, or GPIB, it should be almost as simple as setting the VISA resource name to the correct communication protocol when you add the GPIB interface. For example, the TCP/IP connection string is

    TCPIP[board]::host address[::LAN device name][::INSTR]

    and the GPIB is

    GPIB[board]::primary address[::GPIB secondary address][::INSTR]

    Just open the VISA session with the correct resource type and you should be using that communication channel - everything else (you hope) remains the same. The SCPI instruction set you use should not change. Look at the Agilent 34401 instrument driver that ships with LabVIEW - once you define the VISA session resource, everything downstream is pretty much the same.
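
    If you ever want to script the controller side in a text language, the same idea holds - here's a hedged sketch using the pyvisa package (the address and SCPI commands are made up), where only the resource string changes between transports:

    import pyvisa

    rm = pyvisa.ResourceManager()

    # same SCPI, different transport - only the resource string changes
    dmm = rm.open_resource("TCPIP0::192.168.1.50::INSTR")
    # dmm = rm.open_resource("GPIB0::22::INSTR")

    dmm.write("CONF:VOLT:DC 10,0.001")
    print(dmm.query("READ?"))
    dmm.close()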

    Mark


    Name: Document Generator

    Submitter: mesmith

    Submitted: 18 Feb 2011

    File Updated: 25 Feb 2011

    Category: General

    LabVIEW Version: 2009

    License Type: BSD (Most common)

    Document Generator v1.0.0

    Copyright © 2011, Mark Smith

    All rights reserved.

    Author: Mark Smith

    LAVA Name: mesmith

    Contact Info: Contact via PM on lavag.org

    LabVIEW Versions:

    2009

    Dependencies:

    None

    Description:

    This class is used to generate a single summary document for any folder containing LabVIEW elements (controls, VIs, projects, classes, or libraries (lvlib)). It recursively traverses the folder structure and reads the documentation from all VIs (including custom controls) and the documentation attached to all front panel controls. The class uses that information to build a document in HTML, RTF (rich text), or plain text (ASCII) formats.

    HTML is the preferred format, since this is the only format that includes lvclass descriptions and also supports the creation of a hyperlinked table of contents (TOC).

    The user can choose to

    - use short or long formats - the long format includes the descriptions on all front panel controls

    - include or exclude custom controls from the document

    - include or exclude private members of a class or library

    - include or exclude protected members of a class or library
      (one might choose to exclude private and protected members if the intention is to create an API document for the class or library)

    - enable or disable a dialog that warns that an existing HTML file is about to be replaced by the Document Generator

    Installation and instructions:

    Unzip and open the Document Generator.lvclass

    Examples:

    The UI.vi is an interactive interface for creating documents. The Generate.vi provides an API if the user wants to programmatically call this utility. Both can be found in the Public->Methods virtual folder.

    Known Issues:

    Any documentation read from the source elements (VIs, controls, projects, etc.) is treated as pure text. There is currently no provision for escaping characters that might be interpreted as control characters in HTML.
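
    If anyone needs a workaround before that's addressed, escaping the strings before they go into the HTML is essentially a one-call fix - in Python terms (just to illustrate the idea; the shipped code is LabVIEW) it would be:

    import html

    description = 'Compares <input> against the limit & returns "pass"/"fail"'
    print(html.escape(description))
    # Compares &lt;input&gt; against the limit &amp; returns &quot;pass&quot;/&quot;fail&quot;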

    Version History:

    v1.0.0: Initial release of the code.

    License:

    Distributed under BSD

    (http://creativecommo...g/licenses/BSD/)

    See link for a full description of the license.

    Support:

    If you have any problems with this code or want to suggest features,
    please go to lavag.org, navigate to LAVA > Resources > Code Repository (Certified), and
    search for the "Document Generator" support page.

    Distribution:

    This code was downloaded from the LAVA Code Repository found at lavag.org

    Click here to download this file

  12. Hi There!

    First of all, very good work! :thumbup1::worshippy: This is very handy!

    I've got several suggestions (I could also help out :blink:):

    • In the Message Builder section, it would be useful to have a string format one could apply, like in "Format Variant Into String__ogtk.vi".
    • Or an <string in> + enum(Type) VI, so string-parsed data can be wrapped -> <value><type><string in></type></value>
    • A VariantAsValue -> Cluster, recursion and stuff like this. Like in "Write Key (Variant)__ogtk.vi"
    • And vice versa: with Set Cluster Element by Name__ogtk.vi, a name mapping within a cluster, so optional settings would be possible.

    What do you say?

    Best Regards

    Bernhard

    Thanks! This project isn't really at the front of my queue right now, so it will be a few days until I can evaluate your suggestions - I just wanted to post this reply now so you know I'm not ignoring your feedback.

    Mark

  13. I'm not saying you should always decouple your UI from the core engine of your code. I'm just saying that if you are passing FP refs to low level code, you are not decoupled.

    I do this all the time in many applications. And I know full well that I am essentially tying these two parts together in a way that would not be easy to separate in the future. So, while this might be the right approach in many cases, it cannot be called 'decoupled' if you do this.

    But, in my case I want to move to a system where the UI could be replaced without changing the core engine code. And I want to eventually change to a client-server architecture where the core server can manage several client UIs simultaneously. That requires a fully decoupled UI design.

    I agree - I was trying to make the point that while decoupling the UI code is generally a good practice, one size doesn't fit all and one should consider the requirements of their particular application to decide how decoupled they need to be. Sounds like you've given it a lot of thought and know what you need. I just don't want to encourage anyone to try to decouple UI code where that approach really doesn't fit and would add unneeded complexity.

    Mark

  14. Every rule is made to be broken (when it makes sense). Typically, I would not pass control references from the UI to the "worker" VIs, but I had one application where it made perfect sense. All the application needed to do was open some number of TCP/IP connections (30 or 40 something) and then display the streams to the user. Each stream was independent. So all I had to do was create enough FP indicators, get a ref to each, launch the re-entrant TCP/IP listeners dynamically with a control ref as input, and wait for each reentrant instance to update its corresponding indicator. Any more decoupling (in this very specific case) would have just added unneeded complexity.

    Mark

  15. What card do you have? The cheaper E Series cards have limited DMA channels, and sometimes that can be the reason why you can't get two AO channels at different rates out at the same time.

    What are you outputting to the daq channels? Is it buffered? If the rates are not too fast and non-buffered, you could output both the channels software-timed in the same loop.

    Neville.

    I'm not sure if DMA channels are the limiting factor - I would think that you can't have more than one AO rate because on most systems you can only have one sample clock for the AO generation. Most DAQ devices will have a single AO buffer. The DACs get data from the buffer on each sample clock tick so all of the DACs get data (and generate data) at the same rate. This is true for hardware timed generation. I can't comment on software timed as I don't use it (except for static voltages).

    Mark

  16. You can use two channels for continuous generation, but you generally cannot have two tasks (which you would need to have to enable two separate triggers) on one subsystem - this is from the DAQmx Help:

    "For most devices, only one task per subsystem can run at once, but some devices can run multiple tasks simultaneously."

    What they mean by subsystem is the AO or the AI or the counter on a general purpose DAQ device. So you can have independent AO and AI in two tasks, but not two AO tasks.

    So, if you only have one card available, you'll have to figure out something else. For instance, if the offset between signals is constant (or can be set before the generation begins), you could create a two channel signal where one is just padded with zeros until it needs to start. If you really need a second signal to start at an arbitrary time, you'll probably need another card.
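
    Here's a hedged sketch of the zero-padding idea using the nidaqmx Python package (device and channel names are assumed) - both channels live in one AO task on one sample clock, and the second waveform just starts late:

    import numpy as np
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    rate = 10000
    t = np.arange(10000) / rate
    sig1 = np.sin(2 * np.pi * 100 * t)
    sig2 = np.sin(2 * np.pi * 100 * t)
    sig2[: rate // 2] = 0.0                 # pad: channel 2 sits at 0 V for the first 0.5 s

    with nidaqmx.Task() as task:
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        task.ao_channels.add_ao_voltage_chan("Dev1/ao1")
        task.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.FINITE,
                                        samps_per_chan=len(t))
        task.write([sig1.tolist(), sig2.tolist()], auto_start=True)
        task.wait_until_done()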

    Mark

  17. I was wondering if I could get some opinions on starting and stopping NI DAQ tasks.

    Currently, when I start the program, I go to an Initialize state where I use the Start Task VI and feed the tasks into a shift register. Then when I want to read (after a person presses the Start button), I go to the read state, unbundle the task, use the read VI, and then bundle it back up. So basically the task is open all the time. When I close the program, I stop the tasks, clear the tasks, and exit.

    I am seeing the computer slow down a little over time; we are running 300+ tests per day. I have checked Task Manager, and I am not leaking memory or running the CPU at anything above 5% when it is running.

    Would it be better to start the task when I want to read and then when the test is aborted, passes or fails, stop and clear the tasks and then repeat for each test?

    But I don't want there to be a "long" pause when the user presses the start button, as I will get complaints from the operators that "I am ruining their life by taking money out of their pocket waiting for the stupid test machine" (think I have heard that before?? :P).

    Thanks.

    I think you're pointed in the right direction, but you need a little more granularity in the DAQmx task control. So, when you set up your tasks, instead of calling Start Task, call the Control Task function/VI with Action set to Commit. This transitions from the Unverified state to the Committed state - these transitions are the ones that tend to take a long time while they find and reserve resources and would annoy your users. So now you have Committed tasks in the shift register. Now, calling Start Task on the Committed task just transitions from the Committed to the Running state, which should be very quick. Calling Stop Task transitions from Running back to the Committed state (again, should be quick). Don't clear the task at this point - continue to stop and start the task until you're done with it. Only when you want to exit should you clear the task. It's been my experience that this technique leads to the best performance. See the DAQmx Help -> Task State Model for more details.
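
    In the Python nidaqmx API (not your LabVIEW code, and with an assumed device name) the same sequence would look roughly like:

    import nidaqmx
    from nidaqmx.constants import TaskMode

    task = nidaqmx.Task()
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(1000, samps_per_chan=100)   # finite, 100 samples per test

    task.control(TaskMode.TASK_COMMIT)    # the slow part (verify/reserve/commit) happens once, up front

    for test in range(3):                 # one iteration per test/DUT
        task.start()                      # Committed -> Running: fast
        data = task.read(number_of_samples_per_channel=100)
        task.stop()                       # Running -> Committed: fast, resources stay reserved

    task.close()                          # only release everything when you exit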

    Mark

  18. You can probably use the socket handle to see if data exists on the connection - you can get the OS socket handle with the vi.lib\Utility\tcp.llb\TCP Get Raw Net Object.vi. Then use that socket handle and call the winsock recv() with the MSG_PEEK flag (assuming you're on Windows). This might work to tell you whether a TCP connection created through the LabVIEW functions has data available.
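
    For reference, the same peek in a text language looks like this Python sketch (plain sockets, not the LabVIEW call chain itself):

    import socket

    def data_available(sock):
        """Return True if at least one byte is waiting, without consuming it."""
        sock.setblocking(False)
        try:
            return len(sock.recv(1, socket.MSG_PEEK)) > 0
        except BlockingIOError:
            return False                  # nothing buffered yet
        finally:
            sock.setblocking(True)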

    Mark
