Posts posted by ned

  1. Hello,

    I'm wondering how I should go about creating variables in LabVIEW; I'm trying to do the following C++ code in LabVIEW. It's very simple to do in C++ and I'm sure it is in LabVIEW too, but I'm new to LabVIEW and haven't come across this situation yet.

    There are no variables in LabVIEW, just values on wires. Given the very limited amount of C++ code you posted there's no way to say what a good direct equivalent would be in LabVIEW; you need to provide more context. What are you actually trying to do? Post the LabVIEW code you already have and explain what it is you need it to accomplish.

  2. I've been reading a book on developing APIs (Practical API Design - Confessions of a Java Framework Architect) that touches on the difference between source compatible code and binary compatible code. In Java there are clear distinctions between the two. If I'm understanding correctly, in Java source compatibility is a subset of binary compatibility. If a new version of an API is source compatible with its previous version it is also binary compatible; however, it is possible for an API to be binary compatible without being source compatible.

    I think you have it backwards here - binary compatibility is usually a subset of source compatibility. An API which is source compatible but not binary compatible means that you can't run your existing binaries with the new API, but you can recompile your code without modifying anything. Binary compatibility means you can install your new API and your application continues to run without any changes. If a change is significant enough to break source compatibility then it almost definitely breaks binary compatibility as well, but it's easy to have a minor change that requires recompiling but no changes to the source.

    My initial reaction was that LabVIEW's background compiling makes the distinction meaningless--anything that is source compatible is also binary compatible and anything that is binary compatible is also source compatible. In fact, I'm not even sure the terms themselves have any meaning in the context of LabVIEW.

    After thinking about it for a while I'm not sure that's true. Is it possible to create a situation where a VI works if it is called dynamically but is broken if the BD is opened, or vice versa? I'm thinking mainly about creating VIs in a previous version of LabVIEW and calling them dynamically (so they are not recompiled) in a more recent version.

    A better LabVIEW example might be LV 8.5.0 versus 8.5.1, and versus 8.6. Code written in 8.5.1 is source compatible with 8.5, but code written in 8.6 is not. None are binary compatible - a recompile (even though automatic) is always required, and the matching runtime version is necessary to run an executable. I don't think you ever get binary compatibility between LabVIEW versions, but I've never tried the situation you described. NI works hard to ensure source compatibility between versions by providing translations for functions that change - like the "Convert 4.x data" option on typecast - but without those translations, source compatibility would break.
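
    If a concrete (non-LabVIEW) illustration helps, here's a rough Python sketch of the same idea; this is my analogy, not anything LabVIEW does. CPython byte-compiles source into .pyc files stamped with a version-specific magic number, so the compiled artifact can't be reused across interpreter versions (no binary compatibility) even though the source itself still compiles fine (source compatibility).

        import importlib.util
        import pathlib
        import py_compile

        # Write a trivial module and byte-compile it with this interpreter.
        pathlib.Path("demo_module.py").write_text("ANSWER = 42\n")
        pyc_path = py_compile.compile("demo_module.py")

        # The .pyc carries this interpreter's magic number; a different Python
        # version will reject it and silently recompile from source -- source
        # compatible, but not binary compatible.
        print("compiled to:", pyc_path)
        print("bytecode magic:", importlib.util.MAGIC_NUMBER.hex())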

  3. It looks like the code in that post on the NI forums is quite old, and you definitely don't want to use Winsock directly (or through ActiveX). There are shipping examples of creating a server, under Networking->TCP & UDP, which use the LabVIEW primitives.

    As far as determining the protocol that your client uses to exchange data, you'll either need to find some documentation, or spend some time with a packet sniffer (I'm sure someone else can recommend one for Windows, my experience is with tcpdump on unix) to try to reverse engineer the protocol. There is no such thing as "Winsock standard" handshaking.
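
    For comparison, here's roughly what those shipping examples boil down to, sketched in Python rather than G (the port number and reply format are placeholders; your client's protocol dictates the real ones):

        import socket

        # Minimal listen/accept/read/reply server -- the same listen/read/write
        # pattern the LabVIEW TCP primitives use in the shipping examples.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind(("0.0.0.0", 6340))       # hypothetical port
            server.listen(1)
            conn, addr = server.accept()
            with conn:
                print("client connected from", addr)
                request = conn.recv(1024)        # whatever bytes the client sends
                print("received:", request)
                conn.sendall(b"ACK\r\n")         # reply format depends on the protocol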

  4. We've also used Kvaser CAN devices with LabVIEW. They work OK. The programming may take a bit more time with them compared to the NI hardware.

    The CAN Frame Channel Conversion Library was quite useful in that specific instance. I don't remember why we didn't use the CAN Channel API. Was it that it's only for NI devices?

    Yup, the CAN Channel API is only for NI hardware. I recall doing something similar once: reading raw CAN frames from a KVASER card, writing them to one of the NI-CAN simulated interfaces configured for frames, and reading them back from the other simulated interface configured for channels. This was actually easy to set up, and I had good experiences with the KVASER hardware and software tools.

    I'm working in a food plant and we are required to locate the PC 100 ft from the USB DAQ device during testing (to stay out of production's way).

    The client bought a USB to Ethernet converter box to do this. It was tested with the system and it seemed to work.

    The device is made by "Black Box"

    Single USB to CAT5 Extender, 50 m

    IC244A-R2

    http://www.blackbox....-50-m/IC244A-R2

    Sorry for stating the obvious here, but your client is aware that USB to Ethernet is not the same as USB to Cat5? If they're trying to use an existing ethernet network it's not going to work - they need a dedicated Cat5 cable just for this device.

  6. I haven't done it before (so feel free to disregard the post coming from someone that ignores plainly written requests) but a friend of mine did. The advice I'd try to follow if/when I volunteer would be to get everything at home & work done before it starts because you're going to be <colossal understatement> busy </colossal understatement>. The spouses created a "widows" club... It's a fantastic concept that your team will strategize/design/fabricate/build/test/debug/practice a robot in only six weeks.

    Are you volunteering just for the competition, or to help mentor a team? If you're just helping at an event, it's a lot less work than jcarmody suggests. I mentored a team at my former high school last year and was there 2 nights/week from the kickoff in January through the regional competition in February; the team also had another LabVIEW mentor. I also attended one day of the regional competition with the team and found out too late that the overall event would have benefited from an on-site LabVIEW expert to help during the practice day when the teams are trying to make sure everything works.

    Last year was the first year LabVIEW was an option for programming the robots and I didn't see any teams using complex code, maybe they'll be more advanced this year. Teams don't need to write that much code because the framework that NI provides is nearly enough to run the robots. I think it's more helpful to have experience working with high school students (and I can't provide much advice there) than it is to have knowledge of advanced LabVIEW concepts. It will help to understand the provided framework, but unfortunately I don't know of any way to get access to it without the FIRST-provided CD, and you can't install the FIRST version of LabVIEW and the standard version side-by-side. Last year's framework used global variables and VI server to abort a running subVI in a way that simplified the code for the students but was puzzling to an experienced LabVIEW programmer. If you can't get access to an installation of the FIRST software, reading the documents or viewing Ben Zimmer's FIRST TipJar videos is a good start.

  7. I just noticed that in LV8.6, a flush of a single-element queue seems to be considerably faster than a preview. Is this expected?

    Thanks,

    Gary

    This might have to do with data copies. Depending on what you do with the previewed queue element, and the internal queue implementation, the preview may require creating a copy of the element so that the original element can remain in the queue. When you flush the queue no copy is necessary since that element is no longer in the queue.
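
    Purely to illustrate the copy-versus-hand-over idea (this is not how LabVIEW's queue is implemented internally, just an analogy in Python):

        import copy
        from collections import deque

        q = deque([{"samples": list(range(1_000_000))}])   # one large element

        # "Preview": the element must stay in the queue, so giving the caller its
        # own independent copy means duplicating the data -- potentially expensive.
        previewed = copy.deepcopy(q[0])

        # "Flush"/dequeue: the element leaves the queue, so it can simply be
        # handed to the caller with no copy at all.
        flushed = q.popleft()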

    I didn't use the input called Process Variable on the PID VI. That's where I'm supposed to attach the "error". But when I do that the pump stops working.

    You haven't understood how PID works at all. You MUST use the Process Variable input; in your case it's the measured height of liquid in the tank. The PID VI calculates the error internally by finding the difference between the Process Variable and the Setpoint.
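
    In text-language terms the loop looks something like this Python sketch (gains, setpoint, and timing are made-up numbers); note that the error is computed inside the controller from the setpoint and the measurement:

        KP, KI, KD = 2.0, 0.5, 0.1   # hypothetical gains
        SETPOINT = 1.5               # desired liquid level
        DT = 0.1                     # loop period in seconds

        integral = 0.0
        previous_error = 0.0

        def pid_output(process_variable):
            """One PID iteration: the error is derived from the measurement here."""
            global integral, previous_error
            error = SETPOINT - process_variable      # computed *inside* the PID
            integral += error * DT
            derivative = (error - previous_error) / DT
            previous_error = error
            return KP * error + KI * integral + KD * derivative

        # You feed in the measured level (the Process Variable), not an error.
        pump_command = pid_output(process_variable=1.2)
        print(pump_command)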

  9. I haven't taken the CLD so I don't know what the requirements are, but the logic in your Update_LEDs is quirky. For example, why use two index array nodes in sequence to extract a single element, instead of just wiring both indices to the index array function? Also, to find the first true boolean in an array, consider using Search 1D Array instead of a for loop. Finally it seems to me you could avoid rewriting the boolean array twice in the two for loops (one inside a case structure, the other outside).
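
    The Search 1D Array point, written out in Python as a rough analog:

        flags = [False, False, True, False]   # hypothetical boolean array

        # For-loop version (the pattern being replaced):
        first_true = -1
        for i, value in enumerate(flags):
            if value:
                first_true = i
                break

        # Search-1D-Array-style version: one search call, no explicit loop.
        first_true = flags.index(True) if True in flags else -1
        print(first_true)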

    • Like 1
    So I'm guessing using queues for applications that run 24 hrs a day is probably a bad idea.... I've written programs for controlling processes that would buffer up a bunch of data in a queue, analyze it, adjust control parameters on-line, and then flush the queue. The whole time I'd assumed this memory was freed up, but instead I created a huge memory leak. Awesome :thumbup1: . Definitely going to go about it some other way next time....

    Have you ever actually had a problem with your queues in the past? I think you've misunderstood something here. There is no reason not to use queues in applications that run continuously. You're not creating a memory leak since LabVIEW hasn't lost track of that memory; you've just "reserved" that memory for the next time that data needs to be put into the queue. Unless you actually need that memory back immediately for some other purpose there's no problem.
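
    For what it's worth, the buffer/analyze/flush pattern you describe is a perfectly normal way to use a queue in a program that never exits. Here's a rough Python sketch of the cycle (obviously not LabVIEW's memory manager, just the usage pattern):

        import random
        from collections import deque

        q = deque()

        for _ in range(3):               # in a real 24/7 application: while True
            for _ in range(100):         # buffer a batch of readings
                q.append(random.random())
            new_parameter = sum(q) / len(q)   # analyze the batch
            print("adjusting control parameter to", new_parameter)
            q.clear()                    # flush; the same queue is reused next cycle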

  11. Hi,

    after a couple of months of not using Lavag, I am really surprised by the new interface. It looks neat, but I still like the old one, where you could see all of the new topics and newly updated messages. But one troublesome thing for me is that I cannot access my account at all anymore. I tried everything, like the security questions, which I believe I answered correctly, but they turn out to be wrong. Is there anyone I should contact to ask for help with this? My account is Thang Nguyen.

    Best regards,

    Thang Nguyen

    Have you clicked the "Forgot my password" link? That should send an email to your registered address containing a link that will allow you to reset your password.

  12. This is what LV help says:

    A reentrant VI can have dynamic dispatch terminals only if the VI shares clones between instances. This VI preallocates a clone for each instance.

    To fix this issue, you must either change the terminal in the connector pane to not be dynamic or change the reentrant execution in the VI Properties dialog box to Share clones between instances.

    Do you know why this is so?

    Dynamic dispatch means that you don't know at compile time which VI will execute, and it would be impossible to preallocate a clone of an unknown VI.
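
    The same idea in Python, just as an illustration of why the callee can't be pinned down ahead of time (the class names are made up):

        class Instrument:
            def measure(self):
                raise NotImplementedError

        class Voltmeter(Instrument):
            def measure(self):
                return "volts"

        class Thermometer(Instrument):
            def measure(self):
                return "degrees"

        def read(device: Instrument):
            # Which measure() runs is decided at run time by the object's actual
            # class, so the call site can't preallocate anything for a specific
            # implementation -- the analog of a dynamic dispatch terminal.
            return device.measure()

        print(read(Voltmeter()), read(Thermometer()))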

    • Like 2
    Does LabVIEW access the NIC drivers directly or does it use an additional interface layer?

    If I write/compile my own TCP functions in C and I call them from LabVIEW as external code, will that improve the efficiency of the TCP connection between the two machines?

    It's unlikely that you could improve throughput by writing your own external code, unless you think you can do better than your operating system. From an NI employee in this thread: "The networking primitives are a thin wrapper around the OS's networking stack."

    Try transferring your data in larger chunks, so that each packet of data has a greater ratio of data to overhead.

    EDIT: just to follow up on Gary's suggestion, take a look at this VI from NI for disabling the Nagle algorithm on a single connection.
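
    For reference, here's what both suggestions look like in a Python sketch (address, port, and data format are placeholders); TCP_NODELAY is the underlying socket option for disabling Nagle on one connection:

        import socket

        HOST, PORT = "192.168.1.50", 6340      # hypothetical peer address and port

        sock = socket.create_connection((HOST, PORT))

        # Disable the Nagle algorithm on this one connection, so small writes are
        # sent immediately instead of being held back to be coalesced.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

        # Better for raw throughput: accumulate readings and send fewer, larger
        # chunks, so each packet carries more payload relative to header overhead.
        batch = bytearray()
        for value in range(1000):
            batch += value.to_bytes(2, "big")
        sock.sendall(batch)                    # one big write instead of 1000 tiny ones
        sock.close()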

  14. Most likely the problem is that your native mail client supports authentication, while the NI example does not. When using authenticated SMTP the mail server is willing to accept mail to any destination because it knows who the sender is and that you have permission to use it. Without that authentication the mail server will only accept mail for a limited set of addresses (most likely only mailboxes in that domain) because otherwise spammers will use it to relay mail.
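
    As a point of comparison, this is what authenticated SMTP looks like in a short Python sketch (server name, port, and credentials are placeholders); once the login succeeds, the server is willing to relay to outside domains:

        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "me@example.com"
        msg["To"] = "someone@elsewhere.com"
        msg["Subject"] = "Test message"
        msg.set_content("Hello from the test station.")

        with smtplib.SMTP("smtp.example.com", 587) as server:   # your SMTP host/port
            server.starttls()                                    # encrypt the session
            server.login("me@example.com", "app-password")       # the missing piece
            server.send_message(msg)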

  15. QUOTE (jdunham @ May 7 2009, 02:04 PM)

    I wish that when replacing Multiply (for example) with Compound Arithmetic, then the multiply option would automatically be chosen.

    Yes - and on a related note, I wish that when going the other direction, replacing a two-input compound arithmetic block with a simple boolean operation, the replace menu would come up with the Boolean palette instead of the Numeric palette when the function has boolean inputs.

  16. QUOTE (jonmcbee @ May 12 2009, 06:55 PM)

    I am trying to implement a plugin architecture and am starting to feel like I am missing something. I want to be able to drop a vi in a folder and incorporate it programmatically. To do this I have to use a strict type def to the plugin vi so that I can call it using a call by reference node. This seems to defeat the purpose of the plugin architecture because I have to know what the strictly type defined vi ref is in the main code. What I would like to do is to see a vi in the plugin directory, and be able to get the strictly type defined vi ref from the vi programmatically. I cannot figure out how to do this. What/where am I going wrong?

    You can get a strict type reference to a specific connector pane pattern by creating a constant from the "type specifier VI refnum (for type only)" input of Open VI Reference; then right-click on the constant, browse, and select a VI with your plugin's connector pane. Now you don't need a static reference to your plugin; you just pass the appropriate path to Open VI Reference to open that specific plugin and you'll have a strict reference to it. If the connector pane doesn't match you'll get an error.
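
    If it helps to see the shape of it outside LabVIEW, here's a loose Python analog (the module path and function name are placeholders): load the plugin by path, then call it through an agreed "connector pane" (a known function name and signature), raising an error if it doesn't match:

        import importlib.util
        from pathlib import Path

        def load_plugin(path):
            """Load a plugin module from an arbitrary file path."""
            spec = importlib.util.spec_from_file_location(Path(path).stem, path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            # The agreed "connector pane": every plugin must expose run(data) -> result.
            if not callable(getattr(module, "run", None)):
                raise ValueError(f"{path} does not match the expected plugin interface")
            return module

        plugin = load_plugin("plugins/my_plugin.py")   # hypothetical plugin file
        result = plugin.run({"input": 42})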

    However, without knowing too much about your application, be careful of trying to make your code too generic. You'll never get your framework generic enough to cover every case without exceptions and yet still be specific enough to be useful, and is it really that much more difficult to distribute a new application than it is to distribute a new plugin? I think you're running the risk of having exactly the same problem again, just twice as complicated - you'll end up distributing lots of slightly customized plugins to your customers instead of distributing slightly modified applications. See if you can change your development process instead; for example, you might be able to use conditional disable structures for adding custom code for a particular customer. The one exception I can see here is if you absolutely cannot shut down your code in order to update it, but I'd be really hesitant about using this sort of system to do anything that critical.

  17. QUOTE (pete_dunham @ May 5 2009, 12:32 PM)

    I got stuck when I was changing my code to a (strict) reference through my Class Specifier constant. I kept looking for different ways to get this constant strictly defined...

    I ended up having to change my Class Specifier Constant to a control, click on the control and choose "Include Data Type", and then change the control back to a constant. This seemed like a long way around...is there a more straightforward way?

    Generally I right-click on the block-diagram terminal of a control or indicator, choose create->reference, then right-click the reference and choose create->constant in order to create a strict type specifier constant.

  18. QUOTE (jlokanis @ Apr 17 2009, 06:59 PM)

    5. In order to connect to your EXE, you must provide both the actual IP address of your machine and the VI Server port of your EXE. An easy way to get the IP is to wire the output of the 'String to IP' function into the input of the 'IP to String' function, set the use dot notation input to TRUE, and do not wire anything into the 'String to IP' function. Wire the output of 'IP to String' into the machine name input on your Open Application Reference function.

    You should be able to wire the string "localhost" or the IP address 127.0.0.1 to get a connection to the local machine, rather than going through String to IP and back.

  19. QUOTE (torekp @ Apr 17 2009, 11:37 AM)

    ...In the future, I'd also like to get occasional updates to display on the user's computer. There is no urgency or determinism necessary in this, and it is OK if a few updates are missed. The user's computer has little CPU demand.

    ...

    If TCP/IP or some other alternative would create less burden for the DAQ computer, please let me know. Another alternative I have thought of is to write some binary data to file on the DAQ computer, and read it from the user's computer. Whatever is easiest on the DAQ computer's processor and avoids interrupting the data acquisition and computations, that's what I want.

    This sounds like an ideal use of UDP in place of TCP, since you don't need determinism or guaranteed transmission. It's simpler to set up a UDP listener because there's no connection - kind of like dropping a postcard in the mail instead of calling someone on the telephone. Your DAQ computer sends a UDP packet whenever it wants to display an update; your user computer listens for UDP packets and displays them upon reception.
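
    A minimal sketch of both ends in Python (the port number and message format are placeholders):

        import socket

        PORT = 6341                      # hypothetical UDP port

        # DAQ computer: fire-and-forget, one datagram per update.
        def send_update(message, dest_ip="192.168.1.20"):
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.sendto(message.encode(), (dest_ip, PORT))

        # User's computer: listen and display whatever arrives; missed packets
        # are simply never seen, which is fine for this use case.
        def listen_for_updates():
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.bind(("0.0.0.0", PORT))
                while True:
                    data, sender = s.recvfrom(4096)
                    print(sender, data.decode())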
