Everything posted by mross

  1. mross

    Help!

    QUOTE (335x @ Mar 29 2009, 01:37 PM)

    1) What do you mean by "get values?" One way to get values is to right-click on an output terminal and create an indicator, then run the VI. There are other ways, but this is common; the most common way is to simply route a wire. There is no need for an indicator to "get" and pass data. A wire conveys the information that is generated; that is what wires are for. Is that not "getting?"

    2) The outputs are whatever the author of the VI chose them to be. The indicator you create (see above) will automatically be of the correct type and dimension. Some VIs are "polymorphic," meaning the inputs and outputs can differ depending on how you use them. All of this information is contained in the help docs of properly written VIs. You should turn on Context Help <ctrl-H>; when your cursor is over many items of the Front Panel or Block Diagram, the Help window will display pertinent information. You must learn to read and comprehend this information.

    3) You have given no reason why you want to use a signal generator, so there is no way for anyone to gauge whether you need another method. I suggest you search the help files and the example code that ships with the software for useful signal generation information. There is also a huge amount of information at ni.com in the Knowledge Base. You need to start using these sources of information or you will never make much headway learning LabVIEW. And if you don't do and share your preliminary work, you will soon lose the good will of LAVA contributors. Here is a nice general discussion of how to get help online: http://www.catb.org/~esr/faqs/smart-questions.html

    4) This is a forum for discussions of Application Design and Architecture, a more advanced topic than most new users should be concerned with. You should start out in LabVIEW General, or if you see another forum that is particularly well suited, use it. LabVIEW General has the most traffic.
  2. QUOTE (speedliner @ Mar 25 2009, 03:27 AM)

    My first thought is that your scope may not be sufficient. At the very least you should be sampling in peak detect mode, or without any filtering, so you see the actual data.

    You said: "I want to capture and send out a perfect crankshaft signal..." You are going to need to define "perfect" at the outset. You will find out just how elusive perfect is, if you think about it. You need to apply Nyquist to the output - you should be generating resolution 10 times greater than the car would be sensing from a real sensor. A real analog sensor has no digital weirdness at any level of sensing. Your "analog" output is never going to be perfect. You have to understand that and define what is acceptable - not perfect. Also, your scope should be several or even 10 times faster than the AO generation; otherwise you will not see all the potential errors and imperfections in the output.

    The trace you show looks like a discontinuity in the analog output. I can't open the newer files, so I have to guess. Your VIs probably run one cycle of the sine wave and then the analog output must stop and be restarted - but the stop and restart takes time. There is usually a way to expose the code (if you were using Tasks you need to generate the LV code); then the parts that open the AO channels, configure the AO process, and close the process are pulled out of the looping section. The AO must be "retriggerable," a configuration setting. It is also possible that there is a better VI that produces a continuous AO without this extra work - search the example code that ships with your software and search the Knowledge Base at ni.com.

    Don't lose my first post where I describe how to create the signal with an actual crank sensor. It takes less than a week to buy and build a crankshaft simulator to exercise a crank sensor. Mike
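    The 10x rule of thumb above can be turned into a back-of-envelope number. The figures below (a 60-tooth target wheel, 8000 RPM) are assumed example values, not from the original thread:

    ```python
    # Back-of-envelope AO update rate for a crank signal, per the 10x
    # suggestion above. Wheel geometry and engine speed are assumptions.
    teeth = 60          # hypothetical tooth count on the target wheel
    rpm_max = 8000      # hypothetical maximum engine speed
    oversample = 10     # the 10x margin suggested in the post

    rev_per_sec = rpm_max / 60
    tooth_freq = rev_per_sec * teeth          # tooth-passing frequency, Hz
    ao_rate = tooth_freq * 2 * oversample     # Nyquist (2x) times the 10x margin

    print(f"tooth frequency: {tooth_freq:.0f} Hz")        # 8000 Hz
    print(f"suggested AO rate: {ao_rate/1000:.0f} kS/s")  # 160 kS/s
    ```

    Even this says nothing about edge placement error between samples; it only bounds how fast the AO hardware must update to stand a chance of looking like the real sensor.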
  3. QUOTE (iriszhaoyu @ Mar 20 2009, 05:11 AM)

    You have entered into one of the experiences that cannot be avoided if you are going to do data acquisition, and you will need to learn your way around it - there is not necessarily an easy solution, or the easy solution may elude you for a while.

    Start out with the Field Wiring white paper from NI: http://zone.ni.com/devzone/cda/tut/p/id/3344 And this one, "Five Tips to Reduce Measurement Noise": http://zone.ni.com/devzone/cda/pub/p/id/262 Keithley will mail you an excellent reference, "Low Level Measurements," if you call and ask them (when I got mine they did not have any way to get it online). It is not for sale; you have to ask them for it, and it was free. I also have another reference they publish called "Switching Handbook."

    You have not given any information that allows anyone to help you sort out the problem: no circuit, no specifics of the transducer or signal conditioning, nada. You may get more information if you yourself provide more. Absent any idea of what you are actually doing, I can only offer general comments.

    Use differential measurement systems exclusively. Only use single-ended measurement if you have a very good reason, such as you cannot get an appropriate differential transducer, or you cannot afford to buy equipment with enough channels for differential measurement. You can easily waste, in time, the cost of better equipment and more channels of DAQ if you are too "cost conscious." In an academic environment it is sometimes possible to borrow equipment temporarily.

    It is possible that the building in which you operate has a very noisy ground circuit. I once had to drive a copper-clad ground rod into the earth beneath a lab floor to get a clean enough ground for DAQ, an "instrument ground." It was an industrial environment and there was a great deal of transient and unbalanced load on that grid.
You must beware of safety issues when you have a ground separate from the building grid, and act accordingly. Even this can be difficult if the earth is particularly dry. I have seen building grounds require extensive laying of cable and the application of salt to improve earth conductivity. You may have unintentionally created a ground loop in your circuitry - you must understand how to properly shield wires, and you should use a single-point ground as much as possible. These problems are often caused by inattention to detail in wiring. Sometimes circuits "just grow" to become troublesome; then you might need to rethink the whole system, up to and including the building grid. Basic good wiring practices, a proper ground circuit, and appropriate circuit design elsewhere are necessary and will help avoid these troubles in the future. Or you may simply have a bit of frayed shielding touching the wrong surface. Good luck, Mike
  4. QUOTE (Fritske @ Mar 19 2009, 08:09 AM)

    Having designed 7 cam and crank sensors, and testers for them using LabVIEW, I understand what you are trying to do. Your simplest solution by far is to make a simple apparatus to spin the actual target wheel in front of an actual sensor. Then your simulation is correct at all speeds. If you use the actual electronics from the automobile, then you can also correctly simulate fault conditions such as high and low supply voltage, bad grounds, and so on.

    Attached is an image of a simple simulator. Mine also includes a high-count encoder for examining the signal output from the sensor in real time in the angle domain (much more useful than the time domain, if you think about it). If you leave out the encoder, the apparatus is not very expensive, depending on the motor and drive you use. If you have no DAQ to acquire from the sensor, then you can use an inexpensive AC induction motor and drive. If you plan to do any DAQ, then a servo is best, to keep away radiated and conducted electrical noise originating from the frequency drive.

    If you try to simulate the signal with software, you are in for a long job if the simulation is to have any correlation with reality. And if you do simulate the signal, you should have a study in hand on which to base your evaluation of the simulation - meaning actual measurements of the sensor and target system at different speeds, temperatures, and so on. If you cannot defend your correlation of the simulation to reality, then it is an exercise, not good engineering. I also suspect that the time you put into the software simulation will be much greater than building an electromechanical apparatus.
  5. mross

    Signal Express

    QUOTE (JohnRH @ Mar 7 2009, 10:34 AM) Jason, I agree that a scope is best. But a good scope implemented in LabVIEW is useful to a point - you have to gauge that for yourself. There was a serviceable 2 channel scope for E Series DAQ. I used it a lot and had modified it to provide more scope features like cursors that returned voltage and time. Unfortunately the M Series is quite different and to my knowledge no one has made a scope example on par with the old one. I would love to know if there is an M Series scope VI that is being shared. I really miss it. Mike
  6. I am not answering your questions because I can't answer without knowing the version of LabVIEW and the hardware you are using. You should always state these things when asking questions. Given acceptable hardware the answer is: yes, LabVIEW can do what you have described.

    I have a suggestion that is different from what you have described. Perhaps you will like it, or maybe not. I personally do not like the idea of tracking the sun by tabulated data. It is not necessary, and it makes the machine dependent on the operator putting in the proper lookup table and orienting the machine exactly. Instead, have the machine search periodically for the angle of greatest power generation. You could have the machine tilt east then west to find the angle of greatest incident radiation, then tilt in a vertical plane searching for the greatest incident radiation, and interpolate the best angle at that time. Perhaps interpolation is not necessary if you have sufficient resolution of the angle. Then a few minutes later recalculate a new set of angles and move there.

    This method embodies what I believe is the best way to approach any design: seek the correct result. In your case you want maximum power generation. The location of the sun is only indirectly what you want, because it cannot be inferred in the moment, only predicted based on a great deal of calculation or measurement before the machine is implemented. None of that calculation and measurement, and no precise orientation of the machine, is needed if the machine can simply seek the greatest incident energy. Mike
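    The alternating east-west and tilt sweeps above can be sketched as a simple search. Everything here is an assumption for illustration: `measure_power` is a stub standing in for a real DAQ read of panel output, and the peak baked into it stands in for the sun's (unknown) position.

    ```python
    import math

    SUN = (37.0, 52.0)  # (azimuth, tilt) of peak power - unknown to the seeker

    def measure_power(azimuth_deg, tilt_deg):
        # Stub: measured power falls off smoothly away from the peak.
        d2 = (azimuth_deg - SUN[0]) ** 2 + (tilt_deg - SUN[1]) ** 2
        return math.exp(-d2 / 500.0)

    def seek_axis(sweep_az, fixed_deg, lo, hi, step):
        """Sweep one axis over [lo, hi] and return the angle of greatest power."""
        best_angle, best_power = lo, -1.0
        for i in range(int((hi - lo) / step) + 1):
            a = lo + i * step
            p = measure_power(a, fixed_deg) if sweep_az else measure_power(fixed_deg, a)
            if p > best_power:
                best_angle, best_power = a, p
        return best_angle

    # Alternate east-west and tilt sweeps; in practice, repeat every few minutes.
    az, tilt = 0.0, 45.0
    for _ in range(2):
        az = seek_axis(True, tilt, 0.0, 90.0, 1.0)
        tilt = seek_axis(False, az, 0.0, 90.0, 1.0)

    print(az, tilt)  # settles on the angles of maximum measured power
    ```

    A real tracker would sweep physical actuators rather than a loop index, but the structure - sweep one axis, hold the best angle, sweep the other - is the same.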
  7. QUOTE (diogo @ Feb 22 2009, 06:17 AM) And we are awaiting with anticipation your elaboration on the theme. It is a rather broad stroke to say, "some problems regarding linear interpolation." Kind of hard to get a handle on that. Mike
  8. QUOTE (vieira @ Feb 15 2009, 08:27 PM) Put 10 rectangular boolean switches in a cluster. Arrange them and label them like a phone keypad. Set the switch action to be momentary. I presume you really mean to use controls not indicators. I can't imagine why you would make a keypad as an indicator.
  9. QUOTE (Kubo @ Feb 13 2009, 04:12 PM)

    You should probably lose the sequence structures. The way it sounds like you are using them (I can't be sure - sharing your code is always helpful for getting the best advice), they are merely a waste of space and make the code harder for humans to read; the VI will be sequenced by data flow and the wires between the nodes. Using sequence structures is often a sign that you do not yet understand data flow and are trying to make the VI operate like a non-data-flow, text-based program. I find them useful only for initializing things like the state of controls and indicators at startup, and on rare occasions where I need to place a time delay between two other operations. However, since I wrapped a Wait Until Next ms Multiple in a subVI with an error pass-through, I now sequence the delay by wiring the error cluster wire. This is neater, takes up less real estate on the block diagram, and is recognized as a better coding technique. Serial comm is one of the odd cases where delaying between steps - thereby subverting the data flow - is needed to allow time for the serial comm to complete.
  10. QUOTE (hariprasad @ Jan 31 2009, 01:13 AM) You want a graphic diagramming program. Consider the open-source program Dia. http://live.gnome.org/Dia LabVIEW is a programming language for data acquisition and control. It is true that block diagrams are created in LabVIEW, but they are part of the process of programming using a graphic method, as opposed to programming with text input.
  11. QUOTE (Waleed Ali @ Jan 29 2009, 11:47 PM) Yes, with multiple DAQ modules.
  12. QUOTE (David M. @ Jan 28 2009, 06:29 AM) I see you edited out the references to this being for your final exam. Sharp guy.
  13. QUOTE (Waleed Ali @ Jan 27 2009, 10:13 AM)

    I haven't looked at your code, and I don't know what hardware you are using. The simplest solution is to take all the readings at the highest rate and decimate the results - though I don't know why it would be important to have less data. All the DAQ boards I have used are limited in the variety of simultaneous sampling rates by the number of DMA channels, A/D converters, and on-board clocks. It is probable that to do what you want will require you to purchase multiple DAQ boards and assign the various rates of acquisition.

    QUOTE (Waleed Ali @ Jan 27 2009, 10:13 AM) Thanks for your reply. Actually, this would be fine if I would like to have additional signals from the same module (device) with the same rate. But in my case I want the signals to be acquired from different NI modules at different sampling rates. For this, sampling rate control needs to be available for each signal like the one on the front panel. Can we do that? Thanks

    I haven't looked at your code yet, but I have some general comments. You could take all the channels at the highest rate and decimate the data. This is by far the simplest way to get "simultaneous" readings. You did not share with us exactly what hardware you are using, what voltage ranges it will sample, and the rates needed, so that limits the precision of any responses to your question. Because taking different rates requires multiple DMA channels, A/D converters, and on-board clocks, it is likely you will need multiple DAQ modules (you said that is your plan). The boards must have the means to synchronize built in. I am not sure about M Series boards, since I haven't done this using them, but with the older E Series you needed to use the RTSI bus, so each DAQ board had to be RTSI-capable. Your NI application engineer can help you select proper hardware or comment on the capability of your hardware for the task. NI tech support can also help.
Given proper hardware capability to synchronize (including the chassis carrying the modules), you would set up tasks in the Measurement & Automation Explorer (MAX) for each board to do its task - a buffered acquisition - and then you would trigger them with the same signal. The clocks are quite good, so you can use relative time from that point on. You would get each board working alone, then work out the triggering details. LabVIEW works well when you test your small bits of hardware and software as you go along - eating the elephant one bite at a time. You did not share exactly how "simultaneous" the acquisitions need to be. If you require very tight synchronicity, more than the on-board clocks provide, then I don't know enough to help you with that. Any clock is inaccurate at some level, and imprecise at some level as well. The clocks are only accurate to a point - a fine point, but there is a limit. This is why I say you should trigger all your acquisitions at a particular time and depend on clock accuracy, if that is acceptable. The DAQ hardware was designed to make this simple and effective. Buy DAQ cards with adequate sampling speed for your needs. Buy digital signal processing capability, if that is necessary. When you try to get synchronicity beyond the clock resolution, then you have a much larger task, and it is probably a fool's errand. Probably more important, the hardwired circuits and wiring will have capacitance, and therefore latencies between signal occurrence and signal acquisition due to propagation time through the circuitry. Managing synchronicity is a tough job; don't overdo it if it is not necessary. Mike
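The "sample everything at the highest rate, then decimate" idea above can be sketched as follows. The channel names and rates are assumed example numbers; the raw data stands in for whatever a buffered DAQ read would return.

```python
# Sketch: acquire all channels at the highest requested rate, then keep
# every Nth sample to synthesize the slower rates. Numbers are assumptions.
base_rate = 10_000                        # highest requested rate, S/s
wanted = {"ch0": 10_000, "ch1": 1_000, "ch2": 100}   # hypothetical channels

# One second of raw data per channel (stand-in for a buffered DAQ read).
raw = {ch: list(range(base_rate)) for ch in wanted}

decimated = {}
for ch, rate in wanted.items():
    step = base_rate // rate              # keep every step-th sample
    decimated[ch] = raw[ch][::step]

print({ch: len(v) for ch, v in decimated.items()})
# ch1 keeps samples 0, 10, 20, ... -> 1000 points per second, and so on
```

Note that a proper decimation would low-pass filter each channel before discarding samples, to avoid aliasing high-frequency content into the slower streams; the slicing above only shows the bookkeeping.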
  14. QUOTE (jdunham @ Jan 20 2009, 04:48 PM)

    Our differences on preferred resume style demonstrate the near impossibility of pleasing every resume reviewer. I think your point is well taken for non-technical resumes, and for people with limited experience relevant to the job description. On the other hand, I have read many resumes (though only for engineering positions), and I can't recall an interesting resume that was NOT two pages or more. I can envision a person with five to ten years of work experience filling one and a half or two easy-to-read pages with relevant information; relevant for me includes a meaningful objective or summary section, education, and so on. I tend to view graduate work and some undergraduate project work as work experience. Obviously, I am pretty forgiving about resume length.

    The following is more WRT the original thread than a reply to Jason: What I don't want to see is a pumped-up resume - I am happiest when I see the most relevant information at the top of the first page. No matter how thorough I want to be, by the bottom of the first page my attention may be dropping. If I haven't seen things that interest me, the resume may be on its way to the wastebasket. A two-page resume is so common that I could be foolishly overlooking good candidates if I were offended by that. Personally, I would rather review an easy-to-read two-page resume with a lot of white space than a one-page resume with everything packed in. Other bad ideas are fonts that are too small, information that is too dense, no bullets, and unexpected formatting from one section to another (be consistent); and if there are any typographic errors, the resume had better be extremely strong. I bet the vast majority of typos are in resumes not proofread by a third party. That is how they creep into my own resume. It can be so very hard to see clearly what you have written yourself. Also, one resume for all jobs is a bad idea.
Taking the time to make the resume highlight experience directly related to the job description is the best thing to do. Plan on writing and rewriting a resume many times. I think I spent a minimum of 30 hours on each of the three types of resume I use - the first time I wrote them. I will always review and rewrite to make sure I am not missing an opportunity to make pertinent experience easy to notice.
  15. QUOTE (zmarcoz @ Jan 20 2009, 10:49 AM)

    The most recent copy had no header, but I think it is wise not to expose your address and phone number to the public at large, to prevent theft of your ID.

    You wrote: As a senior project, designed a microcontroller based hand print recognition system. Change to: Designed a microcontroller based hand print recognition system. Reason: a general rule (for resume writing - this is not proper English) is to start a sentence with an action word - a verb form whenever possible. Also try to make every word relevant for the job to which you are applying. No one cares that you did the work as a senior project, therefore that is not relevant. What is relevant is that you designed something.

    You should consider a resume rearranged to highlight the most relevant information FOR THE PARTICULAR JOB TO WHICH YOU ARE APPLYING. That means you should write multiple resumes.

    What you are doing with LAVA may be more useful to you than anything a job recruiter can do for you, more than the help-wanted advertisements, and more than the online job search websites. Those three things are only responsible for about 30% of job placements; 70% of all jobs are found by networking as you are doing here - interacting with people on a professional and personal level, and making sure they know what you can do and how to get in touch with you. You need business cards to hand out.

    Another point, then, is that your interactions with sources of information like LAVA should be of the highest quality and exhibit the utmost professionalism. Your communication about your private life is probably best left unsaid. I don't mean to be unkind, and I am sympathetic, but you must keep in mind that all the things you post to the internet are potentially reasons for you to NOT BE CONSIDERED FOR A POSITION. You are looking for someone to pay you a great deal of money - it probably costs $180,000 a year to sit a full-time engineer down and put him to work.
They need to believe they will get that much return on the investment, and a great deal more, for the work you do. You may want to consider a less casual, conversational tone when writing in areas related to your profession. Perhaps no one participating in this discussion will care about your casual tone, but you cannot be certain, and you do want a job sooner rather than later. Correct? And finally, realize that your worth is as an engineer first, and as the proficient user of a programming language second - or maybe third, or fourth. You should be selling yourself based on the quality of your professional experiences, your education, your communication skills, and your ability to solve problems. If you do not have engineering experiences to explain your value (you clearly do have useful experience), then you must be prepared to describe other experiences that show problem-solving ability in the real world. And, not at all trivially, you must present yourself in a manner that demonstrates you can get along well with your coworkers and superiors, and that you will be an asset to them. Communication skills and "chemistry" with the prospective employer are very important to people looking to hire an engineer. We rarely have the exact experience that a job requires, so we must be able to learn quickly, make good decisions, and get along well with and enhance the abilities of our coworkers. You may wish to explore the intersection of neuroscience and psychology as a possible field where your experiences could be particularly useful. Your work with brain/machine interfacing and data collection, cognition and information handling by the brain, and your electrical engineering education could be very applicable to the burgeoning field of neuroscience. If that interests you, you may write me privately. I may have some ideas. Good luck, Mike
  16. QUOTE (zmarcoz @ Jan 17 2009, 12:24 AM) That is a good answer. But you didn't reply with a definition of data flow, so I think I was OK assuming you to be a novice user who did not understand the distinction. Of course you can communicate between loops. But more likely the person who said "not by data flow" meant communication between two running loops. One loop stops and the data is communicated to another loop waiting to start - that is data flow. Two concurrently running loops cannot communicate by means of input and output terminals. In this case, writing to or reading from a functional global is a good way to communicate between running loops. But data flow is still the means by which the program is sequenced. Within the running loops, all the nodes will run when they get their inputs and will finish only after all their output terminals are filled. Within each loop a functional global can be written to and called for data - according to data flow - and the information passed and received. Mike
  18. QUOTE (jcarmody @ Jan 16 2009, 04:52 PM) I knew there had to be a better way! Thanks.
  19. QUOTE (crelf @ Jan 16 2009, 01:41 PM) Yes, indeed. It is hard to say this in a way both blunt and kind, so please understand I mean this to be kind and helpful: a person who does not understand data flow needs to develop some real applications, have them fall apart, and be forced to solve those problems with minimal outside help. Can you really pass the CLAD and not understand data flow? What kind of test is it? Mike
  21. QUOTE (zmarcoz @ Jan 15 2009, 11:32 AM)

    You should consider not writing any more LabVIEW code until you understand data flow. If you do not understand this, you will waste great amounts of time - our time and your own - until you do. Data flow is the manner by which LabVIEW sequences activities.

    Data flow is not sequential. Sequential progress is typical of text-based programming languages. Text based: line by line, using conditionals to branch in different ways (IF THEN ELSE), jump to a different section (GO TO), and so on. Generally, sequential logic will execute in the same order every time the same inputs are applied. Parallel activities can be very complex to execute in sequential code.

    Data flow: stated simply, a node executes at the time all its inputs are filled. "Node" means a subVI, a loop, a case structure, a primitive function, a library call, etc. - all the things you attach wires to are nodes. A node will NOT execute if its inputs are not ALL filled. A node produces outputs when all the outputs are filled internal to the node. A node will not produce outputs until ALL the internal data flow activities have filled the output terminals. (However, some output terminals may have a default value that is not dependent on data flow - this is particularly pertinent to case structures.)

    Because of data flow, LabVIEW is inherently parallel in its execution. You can create parallel activities without any effort at all. The best thing for you to do is turn on the highlighting tool and observe how data flows through the VI. (This tool is the incandescent light bulb symbol on the block diagram menu.) Many people understand data flow as it applies to subVIs and functions, but fail to understand that the same principle applies to loops and other structures.
A while loop does not stop looping immediately when the stop terminal gets its value; it waits until ALL the output terminals have values, then it performs no further iterations and the data flows away from the while loop. If the output terminals do not get values, the loop will "hang," waiting indefinitely until the data arrives. So the answer to your question is: yes, data flow is not only possible, it is mandatory. Data flow is how LabVIEW works, nothing less. Mike
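    The rule described above - a node runs as soon as all of its inputs are available, and nodes with no wire between them run concurrently - can be mimicked in a text language as a toy analogy. This says nothing about LabVIEW's actual scheduler; it only illustrates the idea:

    ```python
    # Toy data-flow analogy: each "node" runs once all of its inputs exist.
    from concurrent.futures import ThreadPoolExecutor
    import time

    def node(delay_s, *inputs):
        time.sleep(delay_s)   # stand-in for the node doing work
        return sum(inputs)

    with ThreadPoolExecutor() as pool:
        # A and B have no wires between them: submitted together, they overlap.
        a = pool.submit(node, 0.1, 1)
        b = pool.submit(node, 0.1, 2)
        # C has wires from both A and B: .result() blocks until both are filled.
        c = node(0.0, a.result(), b.result())

    print(c)  # 3
    ```

    In LabVIEW none of this plumbing is written by hand - the wires themselves express which nodes depend on which, and the parallelism falls out for free.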
  23. QUOTE (zmarcoz @ Jan 15 2009, 11:32 AM) Yes. You connect the case structures with a wire, and data flow is the means by which the VI is sequenced. Sort of like gravity - you have no choice at all - data flow happens. I have to ask, what do you think the definition of data flow is? Mike
  25. QUOTE (Ernest Galbrun @ Dec 18 2008, 02:47 PM) To change in what way and for what purpose?