Vyresince

Members

  • Posts: 11
  • Joined
  • Last visited: Never


  1. Yeah, I know. It's pretty bad technique, but with this particular VI I think the Front Panel size and order take precedence over those of the Block Diagram.
  2. Thanks for the advice. I ended up just copying the code from the sub VI into the top-VI so I could customize the top-VI's FP a bit more. I know it's a bit clunky, but at least it's working. Thanks again.
  3. Hello all! I've spent my summer working on a research project at my college. Over this time I've modified or created three core VIs (among other things) that we use to analyze our data. The first one counts frequency, the next one shaves this counted data (since our signal is a bit noisy), and the last one performs an FFT on the data in order to see minute deviations from the fundamental frequency.

     Now that you have a bit of background on what I'm doing, here's my problem. I wanted to make the analysis process go a bit more smoothly, so I took these three core VIs and wrapped them all into one all-encompassing VI. The trouble is that the second VI (the shaving program) has dynamic output as it runs and requires equally dynamic input from the user. The reason is that we never know exactly what bounds we'll want to use for shaving, but we want to get as close as possible. So I had the VI display each stretch of data as it came up, and the user could then put in the optimal bounds. As a stand-alone program this works fine, but I'm having problems with it as a sub VI. Since the sub VI only gets the initial input from the parent VI and only displays the final output, I can't make use of the VI's dynamic nature.

     Now that you have the background and the problem, I can ask my questions. The most obvious one: can this be done? Can I dynamically update the output in the parent VI, or dynamically provide input? Is the only solution to have the sub VI's front panel open automatically when it runs? Or do you all, who are much wiser than I in the realm of LabVIEW, have other suggestions on how to wrap these programs into one easy-to-use VI? (See the interaction sketch after this list.)

     I'll attach the parent VI as well as the shaving sub VI. Just please keep in mind that I only started using LabVIEW at the beginning of this summer, so my code and technique are probably pretty sloppy. Thank you all very much in advance. These forums are truly a glorious place for anyone with LabVIEW concerns.

     Download File:post-5431-1155052667.vi Download File:post-5431-1155052695.vi

     P.S. I apologize if this isn't the best category for my question. I feel this is pretty strongly related to user interface, since that's what needs to be dynamically updated, but I could be wrong.
  4. Thank you both very much, i2dx and Crelf. I believe all my questions have been answered, and I will definitely take into account noise and the accuracy of the last significant bit. Once again, thanks to everyone who has helped me with my precision woes.
  5. I believe I have the range set to -10/+10 V in order to catch the fairly common fluctuations beyond -1/+1 V. I might talk with some of the other people I work with and see if there would be any danger in narrowing this range a bit; I hadn't really thought too much about that. I mainly work on the data analysis, but I can certainly mention your comments to those who actually set up the electrical devices and sensors. I didn't think I was losing anything to single-precision assignment, but I wasn't sure either. Much thanks for all your help; you've certainly cleared many things up for me.

     I have another question related to my first (once again, please forgive my ignorance). Were I to upgrade to a 16-bit DAQ, how much would this buy me in terms of precision/significant digits? As I said, right now I get about 6 digits of precision. Would a 16-bit DAQ significantly improve upon this, or is it likely I'll end up just measuring more noise? (See the resolution sketch after this list.)
  6. I have an NI USB-6009 DAQ that takes voltage data for me. It's the 14-bit version, and I know it has a max sampling rate of 48 kS/s (though I happen to sample much more slowly than this), but what about the precision of the numbers it gives me? Will it only give me numbers it's sure of, or could some of my numbers potentially be garbage? The reason I'm asking is that I recently had to rewrite a few VIs to work with doubles instead of singles due to precision concerns, and from this I started wondering about the source of my raw data. For the most part, my voltage just oscillates from about +1 V to -1 V (although it will occasionally exceed these limits). Am I stupid to worry? Will the device only give me numbers it knows, or could this be like the situation with single precision, where it will give you plenty of digits but can only be relied upon for 6-7 accurate ones? Just in case it's helpful, here are some examples of voltage readings I'm getting: -1.1792, -0.336914, 1.19385, 0.532227, -0.644531. Thanks in advance for all wisdom imparted, and I really do apologize if I'm wasting forum space with elementary questions.

     EDIT: The numbers I provided came from a VI that logs the DAQ data, and it saves the numbers in single precision (though I may change this in the future), so I don't know if that has an effect on the numbers or not, but I thought I should mention it. Perhaps saving in single causes them to lose precision?
  7. Wow, I actually just got online to refresh my memory on the number of significant digits that single and double can handle and thought I'd check these forums real quick. Thanks much for answering a question I hadn't even thought to ask yet! I was pretty sure that single could handle 6-7, but I couldn't remember if double was 12 or 15-16. Once again, thanks (to Rolf and everyone else who has helped).
  8. Hello all. Sorry it has been so long since I've replied, but my boss has had me out at another location doing work without an internet connection. After that I was off for the 4th of July and have just got back. I did manage to rewrite the read and write VIs to work with doubles and it was a lot easier than I thought it would be. Much of this ease was due to all the advice I've gotten from your replies and I thank you all very much for your input. If I ever manage to scrounge up some money, I will definitely donate to these wonderful forums.
  9. Well, most of my downstream analysis programs assume that the data is in a single-precision binary file and not ASCII, so I believe I would prefer to save data in a double-precision binary file, if such a thing is possible. Your intuition serves you well, because I am indeed looking at data that varies about a larger mean value. I've thought before of subtracting it out to increase precision as you have suggested, but much of the downstream analysis involves taking FFTs of the data, and I believe subtracting the mean value out will cause problems with these FFT programs. I actually do use IGOR and it's quite useful. As you know, it can open my delimited text files as they are without any conversion into binary files; however, my other software (mainly LabVIEW programs) works on the assumption that the incoming data is a binary file representing numeric data. What I really would like to know is whether it is possible to save data to a binary file in double precision instead of single precision. Can I just rewrite the LabVIEW-provided single-precision saving function to save double-precision files? I've tried doing that the naive way, by just changing the single-precision arrays to double-precision ones, but that resulted in garbage. So if anyone has any suggestions on how to save data to a double-precision binary file, I would love to hear them (there's a rough sketch of the idea after this list).
  10. Sadly, I am a student working with my college's physics department, so I just may need that many digits. However, your advice seems promising and I greatly appreciate it. I think I use the Single Precision Files because those are the kinds of files the other software I use can read... but I suppose I'm just going to have to figure out a way around that or give up on so many digits. Much thanks to all those who replied.

     Well, as I suspected, didierj was absolutely correct. It is not the string-to-number conversion function that is causing the problem; it is this practice of saving data to SGL files. However, I do not see a readily accessible way of saving files with double precision. I'll be on the lookout and working on a method of saving double-precision files, but I thought I would go ahead and post to see if anyone already knows a convenient way of saving such files. As always, much thanks for all advice.
  11. I've searched through the forums and LabVIEW's built-in help index, but I have failed to find the solution to what I think is a ridiculous problem. Here's the problem: I have a GPIB VI that controls an Agilent frequency counter. The VI simply initializes the counter and records the results computed by the counter. Due to speed concerns, this VI records the results in string format and does not parse them into numerical values. That's where the problem comes in, and the problem actually has nothing to do with this VI. The problem is with another VI I wrote to convert the files full of strings into files containing numerical representations of the data, so I can work on the data with LabVIEW and other software. However, when I use the Fract/Exp String to Number function it converts a string such as "+5.748901924922E+002" into the number 574.89 (according to probes and my attempts to manipulate the data with other software such as IGOR). Where has all my precision gone? When I have the result go to an indicator and set the indicator up with the appropriate precision, I can see all the digits just as they should be, but all my probes show only 2 digits of precision, and when I save the results to an SGL file I also get only 2 or 3 digits of precision. Is there any way to remedy this other than making my own version of the Fract/Exp String to Number function? The version of LabVIEW I'm using is 6i and I'm running it on a Mac. Thank you in advance for any and all advice, suggestions, or helpful links.

     EDIT: Well, I've found out that I actually do have more precision in my results than I originally thought; however, there is still a problem, which may actually be the expected result of computer arithmetic. The conversion function will convert a string such as "+5.748901924922E+002" into the number 574.8901977539. Is this just a result of the inherent lack of extreme precision in floating-point numbers represented by a computer? I know the amount of precision one can achieve is limited, but I didn't think this number would reach that limit, though I could very well be wrong. Once again, thanks in advance for any and all assistance. (The precision sketch after this list works through this exact number.)
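Re post 3 (wrapping the interactive shaving VI): a LabVIEW block diagram can't be pasted as text, so below is a rough Python sketch of the general pattern this question is circling around, a producer/consumer exchange in which the analysis worker publishes each stretch of data and then waits for the user's bounds. In LabVIEW terms this would be a queue or notifier between the shaving subVI and the parent's UI loop (or control references, or an automatically opened front panel); all names below are made up for illustration and nothing here comes from the attached VIs.

```python
# Illustrative only: a worker that needs per-iteration input from the user,
# decoupled from the UI by two queues (LabVIEW: producer/consumer with queues).
import queue
import threading

to_ui = queue.Queue()      # worker -> UI: "here is the next stretch of data"
from_ui = queue.Queue()    # UI -> worker: "here are the bounds to shave with"

def shave_worker(stretches):
    shaved = []
    for stretch in stretches:
        to_ui.put(stretch)                 # show this stretch to the user
        lo, hi = from_ui.get()             # block until bounds arrive
        shaved.append([x for x in stretch if lo <= x <= hi])
    to_ui.put(None)                        # sentinel: no more data
    print("Shaved result:", shaved)

def ui_loop():
    while True:
        stretch = to_ui.get()
        if stretch is None:
            break
        print("Data:", stretch)
        lo, hi = (float(v) for v in input("Enter bounds lo,hi: ").split(","))
        from_ui.put((lo, hi))

# The worker runs in the background while the "front panel" loop takes input.
t = threading.Thread(target=shave_worker,
                     args=([[0.1, 2.5, 0.9], [1.1, -3.0, 0.4]],))
t.start()
ui_loop()
t.join()
```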
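Re posts 5 and 6 (what a 14-bit or 16-bit converter can actually resolve): the digits you see in the file mostly reflect the storage format, not the DAQ. A back-of-the-envelope calculation, assuming an ideal converter over a ±10 V range; real accuracy is somewhat worse once the device's noise and offset/gain error specs are included, and narrowing the input range (if the hardware supports it) buys more than the extra two bits do.

```python
# Ideal ADC resolution: one code spans (range / 2**bits) volts.
import math

def code_width(v_min, v_max, bits):
    """Voltage represented by one ADC code (the ideal LSB size)."""
    return (v_max - v_min) / 2**bits

for bits in (14, 16):
    lsb = code_width(-10.0, 10.0, bits)
    digits = math.log10(1.0 / lsb)   # significant digits on a ~1 V signal
    print(f"{bits}-bit over +/-10 V: LSB = {lsb * 1e3:.3f} mV, "
          f"~{digits:.1f} digits for a 1 V reading")
# 14-bit: 1.221 mV per code, roughly 3 digits on a 1 V signal
# 16-bit: 0.305 mV per code, roughly 3.5 digits (4x finer, ~0.6 extra digits)
```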
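Re posts 7 and 11 (where 574.8901924922 turns into 574.8901977539): this is exactly the ~7 significant digits of a 4-byte single versus the ~15-16 of an 8-byte double. The string-to-number conversion isn't losing anything; storing (or probing) the result as SGL is. A plain Python/NumPy check of the exact number from post 11, not the LabVIEW functions themselves:

```python
import numpy as np

s = "+5.748901924922E+002"

d = float(s)          # 8-byte IEEE double (LabVIEW DBL)
f = np.float32(d)     # 4-byte IEEE single (LabVIEW SGL)

print(repr(d))            # 574.8901924922      -> ~15-16 good digits
print(repr(float(f)))     # 574.89019775390625  -> only the first ~7 match

# Usable decimal digits guaranteed by each type:
print(np.finfo(np.float32).precision)   # 6
print(np.finfo(np.float64).precision)   # 15
```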
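Re posts 9 and 10 (saving a double-precision binary file): the point the "garbage results" hint at is that the writer and every reader have to agree on the record size and byte order; changing only the writer to 8-byte doubles leaves readers that still expect 4-byte singles misinterpreting the bytes. A small NumPy sketch of the idea, not the LabVIEW file VIs; the file name and the big-endian assumption (LabVIEW's usual flattened byte order) are mine.

```python
import numpy as np

data = np.array([-1.1792, -0.336914, 1.19385, 0.532227], dtype=np.float64)

# Write raw 8-byte doubles, big-endian ('>f8').
data.astype(">f8").tofile("readings_dbl.bin")

# Reading back works only when the reader uses the same type...
ok = np.fromfile("readings_dbl.bin", dtype=">f8")
print(ok)                                   # original values

# ...while a reader still expecting 4-byte singles ('>f4') sees twice as many
# values, none of them meaningful -- the "garbage" from changing only one end.
bad = np.fromfile("readings_dbl.bin", dtype=">f4")
print(bad)
```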