
Neville D

Members · 752 posts · Days Won: 2

Posts posted by Neville D

  1. Dear all,

    I would like to know what factors affect how fast a VI responds. I am trying to measure the time between sending a 5V digital TTL signal, writing the time to a file, and then asking the VI to send a TTL signal back. (My final goal is to have it finish everything on the order of nanoseconds.) Is the speed of the whole process affected by the I/O card, the computer's speed, or anything else? I am using a PCI-6534. The datasheet on the web says it works at 20MHz. How can I maximize its speed?

    Thank you very much for your attention!!  :)

    Ayumi


    Dear Ayumi,

    I don't know what platform you are using, but if it's a PC running Windows, then forget about deterministic nanosecond timing response.

    If you use LabVIEW-RT on a PC or PXI target, you should be able to get loop rates on the order of hundreds of kHz or even a MHz.

    The fastest and most reliable platform would be a CompactRIO target, with the time-critical code running directly on its FPGA.

    Note that writing to file or displaying data in plots will affect loop speed as well. The best way to do timing tests is to repeat the test about 10,000 times, then calculate and display (or save to file) the average speed.
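    Since LabVIEW is graphical, here is the same averaging idea sketched in Python; `measure_average_latency` and the dummy operation are hypothetical stand-ins for the real DAQ call:

```python
import time

def measure_average_latency(operation, repeats=10_000):
    """Time `operation` many times and return the mean latency in seconds.

    Averaging over thousands of iterations smooths out OS scheduling
    jitter, which on a desktop OS can easily reach milliseconds.
    """
    start = time.perf_counter()
    for _ in range(repeats):
        operation()
    elapsed = time.perf_counter() - start
    return elapsed / repeats

# A trivial stand-in for the real I/O call:
avg = measure_average_latency(lambda: None, repeats=1_000)
print(f"average per-call overhead: {avg * 1e9:.0f} ns")
```

    The key point is that one timed run tells you very little on a desktop OS; only the average (and spread) over many runs is meaningful.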

    See the NI site for some rough timing comparisons.

    Neville.

  2. Hi,

    I am relatively new to LabVIEW (just 2 months of experience). I have created a .NET class and was trying to instantiate it in LabVIEW using the .NET Constructor Node. The constructor takes a .NET array as an input, but when I pull the class into LabVIEW, it gives me a .NET reference instead of an array input. According to the documentation, all .NET array classes (System.Array) are mapped to native LabVIEW arrays. Am I doing something wrong here? Please help me.

    Thanks,

    Srik


    Hi Srik,

    No offense, but why don't you implement a pure LabVIEW solution? That should take care of all the headaches.

    LabVIEW has a rich set of array functions, if that is what you need.

    Neville.

  3. The only other solution I know of is to save all the VIs for a given module/driver to their own LLB, renaming them all with specific pre-/suffixes. I cannot find a way to do this programmatically and really don't care to do it all manually for each VI.

    Suggestions?

    Michael


    Hi Michael,

    There is a tool on the OpenG website that renames a bunch of VI's.

    Here is a version written by David Moore.

    Do the following:

    1. Place rename.llb in your LabVIEW/Projects folder. This way the "Rename" function will appear in your Tools menu. You might have to restart LabVIEW to make it appear.

    2. Open the top-level VI (this is very important!).

    3. Start "Rename Main" from the Tools menu, give it the suffix or prefix you want to rename the project with, and let it rip.

    Be sure to try it out on a small dummy project first!

    Backup a copy of the project before you try it as well, just in case the renaming screws up all the links.

    Another approach is to open the top-level VI and

    Save with Options>Development Distribution

    This will save all the VIs used to a separate location, and then you can build your plug-in from there. (I am not exactly sure I understood what you are trying to do.)

    Enjoy!

    Neville.

    Download File:post-2680-1124988945.llb

  4. Hi Friends,

    I want to access the data of one VI (which is local to that loop) in the other loop. How can I do this?

    Both my VIs are running in parallel.

    Harsh


    There are a number of ways:

    1. Set up a Q: write data in one VI and read Q elements in the other VI.

    2. Use a Notifier: when you have data, send a notification to the other VI. With notifiers there is no caching of data (unlike a Q), so if you aren't listening when it's sent, it will be missed.

    3. Use a Global between the two VIs.
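    The Q option is the classic producer/consumer pattern. Since LabVIEW diagrams can't be pasted here, this Python threading sketch shows the equivalent idea; all names are made up for illustration:

```python
import queue
import threading

data_q = queue.Queue()  # plays the role of a named LabVIEW queue

def producer():
    # Stands in for the VI that generates data in its own loop.
    for i in range(5):
        data_q.put(i)
    data_q.put(None)  # sentinel telling the consumer to stop

def consumer(results):
    # Stands in for the VI that reads the queue in a parallel loop.
    while True:
        item = data_q.get()  # blocks until an element arrives
        if item is None:
            break
        results.append(item)

results = []
t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer, args=(results,))
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
print(results)  # every element arrives, in order
```

    Unlike a Notifier, nothing is lost if the consumer is briefly busy: elements simply wait in the queue.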

    Neville.

  5. Thanks Neville D,

    that seems to be a solution for me. Nevertheless, 'googling' the name led to some strange results ;) , and '+serial +server +labview' is a never-ending search (>25 pages). Can you give me a hint where to find the mentioned freeware? Up to now, I don't have any experience programming Qs, so having a look at it will help me :) ...


    Here is a link to Scott's site: Check out the Serial Terminal VI's. He does not have a Q architecture in them, but there are a lot of examples in LV that you can take a look at.

    http://hannahsmac.magnet.fsu.edu/labview/vi_library.html

    Neville.

  6. Dear all,

    Is it possible to make the PCI-6534 read an input that is around 200mV? The datasheet available on ni.com says the PCI-6534 is 5V TTL. Does that mean there is no way for it to trigger a reading as long as the voltage is smaller than 5V?

    Thank you very much for your attention!!

    Ayumi


    You could continuously read the input and discard data that doesn't fall within your criteria (200mV or whatever).

    Neville.

  7. @kel712

    Now I've had a closer look at your proposal. It's interesting, but not the thing I was looking for. I want to know if it is possible, as for DAQ cards or front-panel buttons, to create an event for the serial port which can be handled with an event case.

    I'm looking for something like this:

    RS232 -> any char arriving -> event triggered -> at some time the event is handled by an event case (dynamically ?? static ??) -> spare time for the program ...


    I don't think you can program serial communication that way yet.. but look at Scott Hannah's Serial Server which is freeware. It will give you an idea of how to configure serial comm.

    Basically, trigger off of "Bytes at Serial Port" being non-zero. If it is, then read serial, else loop back.

    Put serial reads/writes in their own fast parallel loop (10ms or less loop time), and the UI (display, parsing, etc.) in a slower parallel loop.

    Pass data between the two with Qs. When data is read by the serial loop, throw it on a Q, and then the slow UI loop reads Q elements and parses them at leisure.
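    The polling half of that pattern ("Bytes at Serial Port", read, queue) can be sketched in Python; `FakePort` is a made-up stand-in so the example is self-contained (real code would use an actual serial library):

```python
import queue

class FakePort:
    """Hypothetical stand-in for a serial port object."""
    def __init__(self, data):
        self._data = bytearray(data)

    @property
    def in_waiting(self):  # analogous to "Bytes at Serial Port"
        return len(self._data)

    def read(self, n):
        out = bytes(self._data[:n])
        del self._data[:n]
        return out

port = FakePort(b"hello")
ui_q = queue.Queue()  # hands raw data to the slower UI/parsing loop

# Fast loop: poll the port; if bytes are waiting, read them all and
# queue them. A real loop would sleep ~10 ms per iteration and run
# until stopped; here it simply exits once the fake port is drained.
while True:
    n = port.in_waiting
    if n:
        ui_q.put(port.read(n))
    else:
        break

received = ui_q.get()
print(received)
```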

    Neville.

  8. Dear all,

    I am trying to use Measurement & Automation Explorer to configure a PCI-6220 device with an SCB-68 accessory. I tried running the self-test and it was OK. Then I tried creating a new Traditional NI-DAQ virtual channel under "Data Neighborhood" and chose "Analog Input" --> "Voltage", but there is no hardware available to choose at the last step. I would like to know whether I did anything wrong (e.g. it shouldn't be a "virtual channel", etc.).

    Thank you very much for your attention!!

    :headbang:

    Ayumi


    Ayumi,

    The 6220 is an M-series device, one of the newer ones released when DAQmx was introduced. It will not work with LV 6.1.

    Neville

  9. Thank you for your reply!

    I cannot find any examples for DAQmx. Do you mean the examples under "Help" --> "Find examples" in LabVIEW?? There is no NI-DAQmx...

    Ayumi


    Hi Ayumi,

    DAQmx was released starting with LV 7.0. Older hardware will work with the old-style DAQ as well as DAQmx.

    Newer hardware will ONLY work with DAQmx. You should check your hardware, and if it is newer, then it may not work with LV 6.1

    Newer versions of LV will continue to support the older style DAQ as well, so as not to break your old projects.

    A good way to check whether your particular device is supported in LV 6.1 (old-style DAQ) is to open NI-MAX > Devices & Interfaces > DAQ and see if your card shows up there. Do a refresh (F5) just to make sure.

    If you don't see it there, then it is a newer device supported only by DAQmx.

    Neville.

  10. I tried but failed. How do I read Word files and display their content on my LabVIEW front panel?

        I used ActiveX controls, but when I run my program, it did show the content in my panel, and Word was also opened by my ActiveX command. You know, I just want to show the file's content, not open the file in Microsoft Word.

        So how do I read and show a Word file correctly?

    Thanks!


    :blink: I am not sure why you want to open a word file to display in LabVIEW.

    What is easier is to save the information as a text file (with extension *.txt) and then just read the text in with the Read File VIs. No need for complicated ActiveX calls.

    Display it as you wish after that.

    Neville.

  11. Wow, it worked! I wonder though why LabVIEW uses ** instead of ^... it would make much more sense to use the former since it works in the Eval f(x) methods.  I am very curious as to how ^ operates in expression nodes, since I did receive output.

    Thanks for the help  :worship:


    No problem. Somewhere in the LV documentation there is a page showing all the operators allowed for the formula node & expression node.

    N.

  12. Hello everyone,

    since we urgently have to set up a new computer equipped with a DAQmx board (NI PCI-6221) instead of an old DAQ board (AT-MIO-16E-10), it would be very helpful if someone has already written VIs that can replace the old VIs (which appeared in the "NI Measurement/Data Acquisition" menu in LabVIEW) one-to-one.

    Maybe this is a very simple task, but since I am not so experienced with LabVIEW, I look forward to every comment.

    S.


    Use the DAQ Assistant in the DAQmx palette and answer all the questions in the pop-up menus. Then right-click the generated Express VI, open it up (convert to a VI), and look at how the VIs are set up inside. Use that as an example to write your own code.

    It is difficult to find one-to-one correspondence for all the various combinations of DAQ processes, AI, AO, DIO, Counters/Timers, Triggering etc etc. without knowing any specifics about your application.

    It will take a while to convert to DAQmx, but it is worth it.

    Neville.

  13. How would you express exponential expressions using ONLY expression nodes (found in Numeric on the Functions palette)? I have tried 10^x, but the values are not desirable:

    1------[10^x]-----11

    2------[10^x]-----8

    3------[10^x]-----9

    4------[10^x]-----14

    5------[10^x]-----15

    6------[10^x]-----12

    I have tried exp(), but it doesn't work...

    ...

    :wacko:


    Try x**y (where y is the exponent).
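    In C-style formula syntax, which LabVIEW's formula and expression nodes follow, `^` is bitwise exclusive-OR rather than a power operator, and XOR exactly reproduces the "strange" values in the question. Python happens to use the same two operators, so this quick check illustrates it:

```python
# ^ is bitwise XOR, ** is exponentiation -- the same convention a
# LabVIEW formula/expression node uses for these two operators.
xor_results = [10 ^ x for x in range(1, 7)]
print(xor_results)  # [11, 8, 9, 14, 15, 12] -- the odd outputs above
print(10 ** 2)      # 100 -- the power the poster actually wanted
```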

    Neville.

  14. Thanks for your reply.

    I do not intend to create an exe out of my plugins, so I do not have a plugins.exe as you mentioned. My plugin folder contains the VIs (and their support VIs) that I want to add to my main application over time, without changing (rebuilding) my main application (the loader VI, which is loader.exe). My problem is that I have been able to achieve this functionality with VIs; however, when I built an exe I do not get the desired results. An example would be wonderful.

    I hope I am understood.


    I am not sure, but you may not be able to call a VI from an executable. Just build your plugin VIs into an exe as well, and then try it. It should be an easy test.

    Neville.

  15. Hi,

    This is my first post to this forum. I have spent some time going through the forum and am now ready to post! I feel lucky to have found this forum.

    OK, I am trying to build an exe out of the plugin example that ships with LabVIEW 7.1. Though the plugin example works well, when I built an exe it doesn't seem to work; I mean the plugin VIs are not detected. I have a hunch that the path needs to be considered when building an application with plugins. However, even after hardcoding the plugin directory path in my VI, I am unable to get the desired results. Is there anything I am missing here?

    Thanks in advance.


    Say your plugins are in plugins.exe, and the VI is First.vi. Specify the absolute path as C:\plugins.exe\First.vi. Don't forget that the path is now *.exe, not *.llb as before.

    Hope this helps.

    Neville.

  16. Hello everyone,

    The problem I have is that when I place a DAQ Assistant (Express VI) on a block diagram, I am unable to open or configure the (voltage) channels. Is this a software or hardware problem?

                                                                        Kirk :headbang:


    Maybe you don't have the required hardware installed in your PC? DAQmx can be very picky about configuring channels that don't physically exist.

    You might play around with listing virtual channels (by simulating the hardware in NI-MAX).

    Neville.

  17. Hi,

    I have the multiple Analog Output circuit you can see in the picture. The strange thing is that about every 2 s the CPU usage hits 99%. The paradoxical thing is that it is the System process using it, not LabVIEW. But as soon as I stop the VI, the CPU usage is 0%. And that doesn't happen with other VIs running, so it seems to be a problem with my programming.

    But what's the problem ??

    I also tried other VI's with Analog Output. No problem.


    I am not sure why you have separate AO VIs. Why don't you combine all the AOs on Device1, and all the AOs on Device2, each into a single "AO Multiple Channels, Single Sample" write?

    As far as I remember, writing values to the AO uses the onboard FIFO of the DAQ card. Using two DAQ channels but still having only one physical FIFO means some magic has to be performed at the driver level.

    You might want to clean up the locals in your while loop as well. They seem unnecessary & might cause race conditions (they also make extra copies of the data). Wire the controls and indicators directly.

    Neville.

  18. I do have access to the Application Builder, and I have created executables.  However, whenever I have done this, it required the Run-Time engine to be installed.  I'll just have to read up on the App Builder and figure out what I need to do.  Thanks for the help

    <Edit> Looks like DLL's created by the App Builder require the run-time engine as well.  I guess I don't really see the point then, but now I'm in the wrong forum.


    FYI, if you go to the Installer settings Tab of the App builder, it allows you to add components of the LV Runtime Engine to your installer. Then once the whole installer has been built (with your selected components of the LV RTE), you install your app on a new machine. This installs the RTE, puts your executable in the folder you specify in the build etc etc. Very professional!

    I don't see the confusion.. Obviously LV needs the RTE to display the buttons etc (at the most basic level), and these are all added to your target machine at install time. Easy as Pie!

    Neville.

  19. Thanks for the input, that's a good idea. I personally have never created a .dll from LabVIEW, so there will be some learning there. I would still like to see an NI document that describes how those files are laid out, out of curiosity.

    Thanks again


    You don't need to create a dll. What I meant was you can make an executable out of your file read code which could then be installed on any other machine. (You don't need the whole LV install to run it). It is fairly straightforward, and can be achieved in a few mouse clicks.

    I don't know what LV package you have but you would need the Application Builder (to be able to build an executable).

    This is not part of the LV Base or Full Development System, but can be purchased separately. It is part of the Professional Development System.

    As to the datalog file format, that will never be documented, because then it becomes difficult for NI to make changes. Again, the idea is ease of use. It would be like documenting the C code behind LabVIEW. We don't need to know, and we don't really care either. All we users care about is that the datalog files be readable between LV versions, and if not, then document the workaround to make that happen.

    Hope this helps.

    Neville.

  20. Has anyone created datalog files in another program (such as C) that is readable in labview? 


    :blink:

    You could probably hack through the formatting and figure it out, and then spend a few hours writing the C code.

    But WHY would you bother?? Just write an LV program, build an exe and hand it to your C-user. It will make everyone's life a lot easier.

    The idea behind the datalog files is to make them easy to read and write to using LabVIEW. If you need to interface with other code, you are better off defining a reasonable format and then writing the files as tab-delimited ASCII text (as a first start) or binary (if you have storage space constraints).
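    A tab-delimited text file like that is trivial for almost any language to parse. Here is a minimal Python sketch; the column names and values are invented for the example:

```python
import csv
import io

# Hypothetical measurement rows; any real header and units would
# come from your own application.
rows = [("time_s", "thickness_mm"),
        ("0.05", "1.23"),
        ("0.10", "1.25")]

buf = io.StringIO()  # stands in for an open text file
writer = csv.writer(buf, delimiter="\t")  # tab-delimited ASCII
writer.writerows(rows)

header = buf.getvalue().splitlines()[0]
print(header)  # the column names, separated by a tab character
```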

    Also, I have noticed that LabVIEW 7.1.1 creates a different file structure than LabVIEW 7.0, but they seem to be compatible with each other.


    Yes, you hit the nail on the head. The file format can (and most probably will) change between different versions of LabVIEW. Newer versions will be able to read files created with older versions, and if there is a change, there will be NI documentation to support it.

    Hope this helps :rolleyes:

    Neville.

  21. For a specific application, the measurement file is more than 30MB long, containing approximately 850,000 records!!! The acquisition rate is very high: 250 samples per 50 ms. No, it can't be reduced at all!


    I think binary files would be a better way to go. The files are NOT human-readable anyway (if you can't even open them up!!). This will make your files much smaller and your database easier to handle.

    If there is any specific info that you absolutely must be able to have human-readable, just throw that in the file name.

    For example: Coil_01_2004_08_03_1_30_pm_First_Run.bin

    It will be a bit of work to write the acquisition and file-read VI's, but once debugged, you don't have to mess with them again. Look at the binary data acquisition VI's as well for ideas.
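    For scale, the space savings are easy to see with a fixed-size binary encoding. This Python sketch packs samples as little-endian 8-byte doubles; the format string is just one reasonable choice, not the LabVIEW datalog format:

```python
import struct

samples = [1.23, 1.25, 1.24]  # hypothetical thickness readings

# 8 bytes per sample, no delimiters, and no string-to-number
# parsing when reading the data back.
fmt = f"<{len(samples)}d"          # little-endian float64s
blob = struct.pack(fmt, *samples)  # these bytes go to a .bin file

restored = list(struct.unpack(fmt, blob))
print(len(blob), restored)  # 3 samples round-trip in 24 bytes
```

    The same fixed-size record layout also makes it cheap to seek straight to record N instead of scanning the whole file.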

    You could break up the number of records in a single file, have individual databases by month or year to limit the size of your database.

    Alternately, you could have a summary of the data go to a database, i.e., max/min/median thickness for the day or something like that.

    Face it: 99.99% of the time, nobody is going to look at the reams of raw data. I used to work in the fuel-cell industry, where they had a mania for maintaining raw data (2 GB per day). I found that in a year's time, NOBODY bothered to look at it! They were only interested in anomalies (deviations from the norm). That might help you think about what is important to look for, and store only that bit of information rather than reams of raw data.

    Cheers,

    Neville.

