Posts posted by EJW

  1. QUOTE (Minh Pham @ Nov 20 2008, 06:52 PM)

    This is probably not the right way of using the In Place Element. It is designed for working with elements/structures which have the same size at both the input and output: operations such as modifying data elements within an array or swapping data between array indices, without creating an extra memory buffer, hence performance is improved.

    Building a new array based on the data from the In Place Element (IPE) input node will not help, as you are increasing the size of the IPE output; as a result, LV still has to allocate a buffer for the operation described.

    Actually, I am replacing the third element with new data of the same size. I keep the first two indices and input a new third array to replace the original.
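
    As a rough analogy outside LabVIEW (the IPE itself is graphical, so this is just a NumPy sketch with made-up sizes), a same-size in-place replacement versus rebuilding the array looks like this:

      import numpy as np

      # Rough NumPy analogy, not LabVIEW: replacing one same-size row in place
      # reuses the existing buffer, while concatenating allocates a new one.
      data = np.zeros((3, 4096))      # three "array indexes", each 4096 points
      new_row = np.random.rand(4096)  # same-size replacement data

      data[2, :] = new_row            # in-place: no new buffer for `data`

      rebuilt = np.concatenate((data[:2], new_row[np.newaxis, :]))  # new buffer allocated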

  2. I like to use enums in my programming because of the text output they have when used with a case structure, as opposed to a menu ring's numeric output.

    Is there a way to make a control or XControl that uses the menu ring for the control, but outputs an enum-style value?

    I know this could probably be accomplished by using the menu ring's string property, but I'd rather have a separate control I can implement.
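
    For comparison, here is roughly what the ring-versus-enum difference looks like in a text language (a hypothetical Python sketch, not an XControl):

      from enum import Enum

      # Hypothetical analogy: a ring hands you a bare number,
      # an enum hands you a named value you can branch on by text.
      class TestState(Enum):
          IDLE = 0
          RUNNING = 1
          FAULT = 2

      ring_value = 1                  # what a menu ring would give you
      state = TestState(ring_value)   # wrap it so the "case" reads by name
      print(state.name)               # prints "RUNNING"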

  3. QUOTE (guruthilak@yahoo.com @ Aug 6 2008, 06:54 AM)

    Hi,

    There are a couple of examples under LabVIEW which can provide you ready-made solutions. You may have to tweak them a little bit to meet your requirements (probably). I suggest you not use those Express VIs.

    Go to LabVIEW's "Help->Find Examples" and select the VI which you are looking for.

    You can always select the assistant with the left mouse button, go to the Edit menu, and convert it to a task name constant, then repeat that process and convert to code or code plus example.

  4. QUOTE (Ton @ Aug 1 2008, 10:06 AM)

    You mean like express VIs?

    I would go for proper DAQ routines. An express VI might load a new task on every iteration, and in compiled mode some optimizations might be left out.

    Ton

    I found something interesting. The machine I wrote the code on uses DAQmx 7.4.0f0; my test machine in the lab uses 8.0.1f0. The problem shows on both machines.

    However, I checked the machine that is actually running the test, which uses 8.0.0f0, and it is not happening on it.

    That is very strange.

    :headbang:

  5. QUOTE (Louis Manfredi @ Aug 1 2008, 10:14 AM)

    Hi EJW:

    Not sure why you're having this problem, and why only in the executable and not in the development system, but I'd worry about the DAQ Assistants creating handles to DAQ processes and not, for one reason or another, destroying them when done.

    You might want to try opening these up and breaking them apart into three pieces: an initialization part to move outside and before your while loop, a part that needs to be done every loop cycle inside the loop, and a part that closes the processes outside and after the loop. It might not be as easy as I'm making it sound, but if you can do it easily, it might be worth a try.

    Best Luck, Louis

    Yeah, that is what I am doing now. I know I have run into issues where one task reads DIO and another writes to DIO on the same device (different lines).

    Also, the other device reads a DIO and an AI, but I don't think that is usually a problem.

    Worst case, I'll string them all together with the error wires and let them operate sequentially.

    In some cases I know I have had to actually insert a Stop Task, otherwise the next task doesn't work. I am hoping to complete this without all the extras!

    EDIT: I converted the assistants to code only, put them outside the while loop, and did the read/write in the loop, and the memory leak went away. Now I just have to make sure everything is functioning on the machine it goes on. Hopefully there are no issues.
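
    For reference, the same configure-once / read-every-iteration / clear-once shape looks roughly like this in the Python nidaqmx bindings ("Dev1/ai0" is a placeholder channel, not from the actual program):

      import nidaqmx

      # Sketch only: create the task once before the loop, read inside it,
      # and clear it afterwards, instead of letting a DAQ Assistant rebuild
      # the task on every iteration.
      task = nidaqmx.Task()
      task.ai_channels.add_ai_voltage_chan("Dev1/ai0")

      try:
          for _ in range(1000):        # stand-in for the while loop
              sample = task.read()     # on-demand single-point read each iteration
              # ... process / display sample ...
      finally:
          task.close()                 # equivalent of Clear Task, done once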

  6. QUOTE (Ton @ Aug 1 2008, 10:06 AM)

    You mean like express VIs?

    I would go for proper DAQ routines. An express VI might load a new task on every iteration, and in compiled mode some optimizations might be left out.

    Ton

    I thought of that, so I added a Clear Task on the task out of each Express VI, which actually caused it to fail on the second iteration.

    I was actually trying to minimize the amount of code I was rewriting in this program, as there is a new version in 8.5 coming in about two months, and I didn't want to spend a lot of time on this one!

  7. I am updating a LabVIEW 7.1 program. I removed all Traditional NI-DAQ code and replaced it with DAQmx code and the DAQ Assistants.

    The program, when compiled (myprogram.exe), has a memory leak. The memory usage continues to grow until you get low on virtual memory.

    However, in development mode inside of LabVIEW, there does not appear to be a memory leak. The LabVIEW memory usage never changes.

    How would I go about finding what is causing this? There are about four DAQ Assistants in a while loop waiting on a test to start; these DAs poll various digital and analog channels. Could these be the cause of the leak? If so, why only in the compiled version and not the IDE?
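
    One way to confirm and measure the leak from outside LabVIEW would be a small watcher script like the sketch below (it assumes the built EXE is named myprogram.exe, per the post, and uses the psutil library):

      import time
      import psutil

      # Sketch: log the EXE's resident memory over time; steady growth
      # while the program idles in its while loop indicates the leak.
      target = next(p for p in psutil.process_iter(["name"])
                    if (p.info["name"] or "").lower() == "myprogram.exe")

      for _ in range(60):
          rss_mb = target.memory_info().rss / 1024 / 1024
          print(f"{time.strftime('%H:%M:%S')}  {rss_mb:.1f} MB")
          time.sleep(10)               # sample every 10 s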

  8. I have recently run into an interesting problem. I have a new 8105 controller, a 1031 chassis, and two 6251 DAQ cards. My LabVIEW program on this machine starts up with Windows (XP SP2). However, when the program initializes, it gets a device-not-present error. If you stop and start the program again, or close it and reopen it, it works fine; the device is there. Similarly, if you take the program out of the Startup folder and launch it manually after Windows starts, it works fine. It appears to me that Windows has not finished recognizing the hardware, or the drivers are not fully loaded, by the time my program launches from the Startup folder, causing this error.

    Unfortunately, this is a gage in a production environment, so it HAS to start up with Windows. The only solution I have found so far is to put a delay of several seconds into the beginning of the program before configuring tasks. This seems to allow Windows and NI-DAQ to finish doing their thing so I don't get the error. Anyone else have this problem, or know of a workaround other than a time delay?
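
    An alternative to a fixed delay might be to poll until the DAQ device actually shows up before configuring any tasks; a rough sketch using the Python nidaqmx bindings is below ("PXI1Slot2" is a placeholder device name):

      import time
      import nidaqmx.system

      # Sketch: wait for the card to appear (or time out) instead of
      # sleeping for a fixed number of seconds at startup.
      def wait_for_device(name="PXI1Slot2", timeout_s=60.0, poll_s=1.0):
          deadline = time.monotonic() + timeout_s
          while time.monotonic() < deadline:
              names = [d.name for d in nidaqmx.system.System.local().devices]
              if name in names:
                  return True
              time.sleep(poll_s)
          return False

      if not wait_for_device():
          raise RuntimeError("DAQ device never appeared; check driver/startup order")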

  9. QUOTE (crelf @ Jun 17 2008, 02:52 PM)

    Good point! So I called NI, and the response was NO, it would not affect the NI warranty!

    QUOTE (Neville D @ Jun 17 2008, 02:57 PM)

    You might also consider that the NI memory will definitely support the extended temperature spec (if you have that option) in harsh environments.

    Neville.

    You know, I have ordered stuff with the extended temp spec, and yet I am still unsure what it is that is different! Do you know?

    Also, a lot of memory these days is designed around "gamers," who tend to have HOT systems, so one would expect the more recent memory to be capable of withstanding a harsh environment. I have seen analysis enclosures of mine (which contain the PXI system and other electronics) reach 120 °F internally without difficulty. All my enclosures have air exchangers, which don't work very well, but my newest one has an AC unit (1000 BTU) to keep my precious NI equipment comfortable!

  10. Has anyone upgraded the memory in their PXI systems? Do you buy the way-overpriced NI memory, or do you order something from a respectable company like Corsair, OCZ, or Kingston? After all, it is just laptop memory!

  11. QUOTE (crelf @ Jun 16 2008, 06:22 PM)

    Normal VIs have a File > Disconnect from Library menu item, but polymorphic VIs don't. If a poly is part of a lvlib, and the lvlib is deleted, the poly is broken with a "This VI is connected with a library that LabVIEW cannot find. Find and load the library or select File»Disconnect from Library" message. I don't have the original lvlib, and there's no File > Disconnect from Library menu item - how do I fix my poly?

    Solution: save it under a new name - the new copy will be disconnected from the lvlib. I'd call that a bug.

    Oddly enough, I had that happen with the Write To Spreadsheet File VI. I lost all my polymorphic instances except one.

    NI had me recopy the original lvlib from the CD to my computer and replace the broken one with that. All was good.

  12. QUOTE (Michael_Aivaliotis @ Jun 17 2008, 01:02 AM)

    Dude, you can't be serious. There must be like a billion PHPNUKE forums and support sites out there.

    LOL, THAT'S THE PROBLEM!! Have you ever tried to find a needle in a billion-piece haystack?

    However, through patience, prudence, and several reinstalls, I did finally get it to work.

  13. QUOTE (TobyD @ Jun 16 2008, 01:26 PM)

    Seriously?!? :wacko::throwpc:

    SURE! Haven't you noticed? The bigger the OSes get, the bigger the problems!

    As a side note, in my days of AOL (before realizing it was just an expensive portal), I preferred AOL v2.5. I last used AOL when 9SE came out and my neighbor wanted it installed; I recommended to her that 2.5 was probably the best, as she was having numerous problems with 9SE. On the other hand, $24.95 a month doesn't sound bad now, considering I pay $89.99 a month for 12 Mbps and a static IP!

  14. All these programmers; someone must know something about web programming. I need help with a phpNUKE installation. All the files are in place: I inserted the domain name in the admin.php file and uncommented that section, I updated the config.php file to have the correct information, installed the database, ran the nuke.sql script to set it up, added a user to the database and... nothing. I get a Nuke page that says there is a database error (before, I was getting just a blank page).

    HELP HELP HELP HELP!!!!!!

  15. All right, this is a relatively simple application, yet I want it to be MODERN and take advantage of the fact that all my systems now use dual-core processors. I understand that the new Timed Loops in 8.5 have the ability to assign a processor or processor core to each loop, though I don't understand why that's not available on the standard While Loop.

    I am working on a new machine that has taken an old test and split it into two stations to do a new part. (It is impractical mechanically to try to do it in one station.)

    So, station one tests the top of the part and station two tests the bottom. Both stations rotate off a single drive motor, with an encoder attached to the bottom of each station's drive shaft.

    I use the encoder as an external timing source connected to CLK0, and I also jumper the input to PFI0 for triggering. Now I am acquiring at the exact rate the motor is turning (12-bit encoder, 4096 points per rev).

    Unfortunately, I am only measuring one analog input channel for each test, yet I have two independent timing sources, so I have to use two DAQ cards (6251M). What a waste of channels, oh well.

    My program needs to wait for a start signal on each DAQ card's D0.0. Most of the time there is a part in both stations, but when the machine is being run manually or first starting up, there may not be one in one of the stations. When I have acquired my data (4096 points), I need to process it, write results to a file, display results on the front panel, and send an output back to the machine's PLC as to whether the part is good or bad, or whether there was a fault (D0.1 TEST COMPLETE, D0.2 TEST ACCEPT, D0.3 TEST FAULT).

    The processing of the data should probably be done in parallel. I also need to have some front panel controls to change a handful of settings.

    My thinking is this: two Timed Loops (producer loops), one set for core 0 and one for core 1; one While Loop to handle control/DIO/FP display/file handling; and two While Loops for processing (consumers). A rough threaded sketch of this layout follows this post.

    The timeout state should probably include an event structure to handle FP settings changes, and the timeout event could handle polling the Start lines.

    Upon receiving the start signal, I could send a notification to the appropriate Timed Loop, which would cause my data acquisition to occur. The Timed Loop would use an internal clock for acquiring instantaneous single-point readings while not actually in test mode.

    When the read has finished, the data is output to a queue, which of course triggers the consumer loops.

    Now for the part that I need help with. First, should the consumer loops display my results and do my file writing, or should that be done in the control loop? Second, if results are handled in the control loop, how do I let it know I am done, pass the data back to it, and tell it which indicators to use (station 1 or 2)? When an individual station's work is done, how do I determine which one finished so as to fire the appropriate D0.x on the right card?

    Also, does using the Timed Loops as described make sense, especially since I am always acquiring data, be it single-point during a timeout or an array of data from an actual test? By the way, I have that part functioning: switching between single-point internally clocked and multipoint externally clocked and back again.

    Attached is an image of what I have started; some of the code you will see is there for test purposes only, not actual design. What is not pictured are the two identical (empty) consumer loops, each with a Dequeue Element (array data) in it.
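
    As mentioned above, here is the rough threaded sketch of that producer/consumer layout (plain Python rather than LabVIEW, with the DAQ reads and analysis faked); tagging each result with its station number is one answer to the "which station finished" question:

      import queue
      import threading

      # Rough analog of the proposed layout: one producer per station hands a
      # finished 4096-point record to its own consumer via a queue, and each
      # result carries the station number so the control side knows which
      # indicators / D0.x lines it belongs to.
      RECORD_LEN = 4096
      data_queues = [queue.Queue(), queue.Queue()]   # producer -> consumer, per station
      result_queue = queue.Queue()                   # consumers -> control loop

      def producer(station):
          record = [0.0] * RECORD_LEN                # stand-in for the externally clocked read
          data_queues[station].put(record)           # like Enqueue Element

      def consumer(station):
          record = data_queues[station].get()        # like Dequeue Element (blocks until data)
          passed = max(record) < 1.0                 # stand-in for the real analysis
          result_queue.put((station, passed))        # tag with the station number

      for s in (0, 1):
          threading.Thread(target=producer, args=(s,)).start()
          threading.Thread(target=consumer, args=(s,)).start()

      for _ in (0, 1):                               # "control loop": knows which station finished
          station, passed = result_queue.get()
          print(f"station {station} done, accept={passed}")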

  16. I am trying to take vibration measurements on our old test stands. The program I am writing in LV 8.5 is to replace our outdated LabWindows 5 program, and I have a new PXI system to replace our outdated 286 machines (don't laugh). We are taking vibration measurements with PCB accelerometers, typically 100 mV/g, and using an Althen VIB-RMS 5B module for signal conditioning. The original system acquired this on an E Series multifunction DAQ; now I am using a 6221M.

    I am not familiar with C or LabWindows and am unsure how they were calculating average g and peak g from the hardware I mentioned. I would assume (I know) that if I took my analog input, multiplied it by 1000 (to convert volts to millivolts), and divided it by the accelerometer's sensitivity (100 mV/g), I would get g. That does not appear to be happening. Now, I know the 5B is an RMS module and the accelerometer measures peak, so I think I am missing something here that involves either the RMS value of a sine (0.707) or the square root of 2. How do I calculate my peak g from an RMS value coming in, and then how do I calculate an average g? FYI, I have no vibration toolkit, although it should not be needed, as my signal conditioning occurs before my DAQ and I am not actually using a vibration DAQ.
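
    Assuming roughly sinusoidal vibration, the conversions would look something like this (the 0.25 V input is just an example value):

      import math

      # Sketch of the conversion, assuming a 100 mV/g accelerometer, an RMS
      # signal-conditioning module, and roughly sinusoidal vibration.
      SENSITIVITY_MV_PER_G = 100.0

      def g_rms(volts):
          return (volts * 1000.0) / SENSITIVITY_MV_PER_G   # V -> mV, then mV / (mV/g) = g (RMS)

      def g_peak(volts):
          return g_rms(volts) * math.sqrt(2)               # peak = RMS * sqrt(2) for a sine

      def g_average(volts):
          return g_peak(volts) * 2.0 / math.pi             # rectified average of a sine

      print(g_rms(0.25), g_peak(0.25), g_average(0.25))    # e.g. 0.25 V from the 5B module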

  17. QUOTE(ned @ Nov 14 2007, 01:04 PM)

    ::SNIP:: I hope you don't find yourself in that situation. I've converted several applications to RT which depended heavily on passing a cluster of references to every front panel object through to every subVI, and because of this, separating the logic from the user interface was not simple. I think you'd find that replacing all your events would be similarly difficult ::SNIP::

    I think one of the problems people are having here is that everyone assumes all design patterns are created equal, that is, that they all HAVE to be usable in programs of every size.

    A simple program that does not do a whole lot, and that takes one programmer a month or less to write, could benefit from using the Event Structure as a state machine.

    I have written programs that only require a day or two of programming, and writing them as state machines took up most of the time. I did rewrite one as an event machine, and not only was I able to get it to work right without a bunch of hoops to jump through, it took me only half a day or less!

    I agree a complex program that has hundreds of subVIs and that several people have worked on for three years would probably NOT work using the Event Structure. However, some of the failings of the structure could be overcome by pushing the R&D guys to make a few changes. These changes could include:

    1. A 'Lock Event' option on any event case, including user events.

    2. 'Flush Events'. Think about it: if you "created" an event structure in LabVIEW, it would be a queued state machine without the ability to flush and dequeue (a minimal sketch of that idea follows this list).

    3. 'Discard Event' on ALL events.

    4. A built-in error event that could be called, with the option to resume where you left off once the error is handled.

    5. Anything else that would ease the transition from a standard state machine to an event machine.
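
    A minimal sketch of that queued-state-machine idea from point 2 (plain Python, not LabVIEW):

      from collections import deque

      # States are enqueued, dequeued one at a time, and the queue can be
      # flushed, which is exactly what the Event Structure does not let you do.
      states = deque(["init"])

      while states:
          state = states.popleft()                 # dequeue the next state
          print("handling:", state)
          if state == "init":
              states.extend(["acquire", "analyze", "report"])
          elif state == "analyze":
              fault = False                        # stand-in for a real error check
              if fault:
                  states.clear()                   # flush everything still pending
                  states.append("error")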

  18. QUOTE(EJW @ Nov 9 2007, 11:22 AM)

    ::SNIP:: I realize this post may start a forum riot (state diagram diehards), but I am curious to know people's current opinions.

    Well, this is certainly an educational topic. At least we are getting good information all in one spot, as discussions on event structures seem to be a little thin anyway.

    I guess it comes down to preference and even program size. I am assuming the projects most people are commenting on are rather large compared to the one I will be doing. Most likely I will stick with the state machine; even as a former text-based, event-driven programmer, I will muddle through the cumbersome state machine process.

    When it comes down to it, a state machine IS nothing more than a hard-wired Event Structure. Your data still flows to a decision maker which dictates what the next STATE (EVENT) will be.

    Verbiage appears to be the only hurdle here as to what it is called, an event or a state. The two really are the same, albeit handled a bit differently.

    I think the whole LED thing is a bit too much as well. Oddly, the program I am going to be rewriting was originally written with a stacked sequence structure inside a while loop. I think the third sequence frame has another while loop where it waits for a start signal and processes the half dozen or so user controls. That program did NOT save any settings, so if you changed something that wasn't programmed in, you had to change it every time you restarted, or someone (ME) had to go back, make code changes, recompile, and take it back out to the machine. I did NOT write the first iteration of this (LV 5.1.1), but I am going to rewrite it (LV 8.5).

    :headbang:

  19. QUOTE(Michael_Aivaliotis @ Nov 10 2007, 03:41 PM)

    I strongly agree with this comment. It really depends on what the code is for. For all customer projects ... :P

    Luckily for me, I don't do customer projects. These are all in-house programs designed to run on production gaging equipment with little or no user input, aside from me or another qualified person making a setting or calibration change! One nice thing I like about the Event Structure, though, is that when you respond to a control in the Value Change event, you have both NEW and OLD data available to you without using locals or shift registers.
