Everything posted by Bryan

  1. No problem on the confusion. I'm more familiar with the lower-level LabVIEW programming because when I first started using LabVIEW, there weren't a whole lot of higher-level things for me to play with, and the ones that were there lacked some desired functionality, so I ended up in "trial-by-fire" situations where I HAD to learn the lower-level stuff. Sorry I couldn't be much help.
  2. Well, there is a write file command in the example I provided, but it's not one of the higher-level ones like "Write Characters to File.vi". That function opens, writes to, and closes a file each time it's run. My example image illustrates opening a file, keeping it open while you want to write to it, and closing it when you're done. This method is faster and performs fewer redundant actions that take up processor time. You could use "Write Characters to File.vi" if processing time and speed are not an issue. At what interval are you wanting to log data? I'm getting kind of lost as to what exactly you're wanting to do.
  3. I've saved data on NT machines in CSV format before and never had a problem. Windows 2000 is NT based, so I don't understand why it would do it for one and not the other. Now, the file system structure might be different. Did you put your path for your new file in a constant in your diagram, or are you prompting for it, or specifying it at run time? Would you be able to provide a screen shot of your code so that we can look at it for you?
  4. What I always used to do was to store my collected data in a 2D array (if there wasn't a LOT of it) and then use the "Array To Spreadsheet String" function to convert it to a string and then write it to a file. If it's tab-delimited, you can call it a ".xls" file and Excel will open it, or call it .txt, etc. I've attached an example using a "Get Date/Time String" and 3 random number generators stored into a 1D array to kind of give you an example. I hope it helps. If you're collecting data faster than once per second, you might want to use the "Get Date/Time In Seconds" function and format it to your liking. That function will give you a timestamp with millisecond precision. Edit: I noticed you're running LabVIEW 7.0, so I included a VI for you. The difference between the VI and the image is that I included an indicator for loop iterations and removed the file path constant, so it will prompt you to save the file somewhere. Hope it helps!
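A rough Python stand-in for the pattern described above, in case a text version helps; the file name, channel count, and formatting are only illustrative (the actual LabVIEW example is the attached image/VI):

```python
import random
import time

# Stand-in for the LabVIEW pattern above: one timestamp plus three readings
# per row, tab-delimited, so Excel will open the result if it's saved with
# an .xls (or .txt) extension.
rows = []
for _ in range(10):                                   # pretend acquisition loop
    timestamp = time.strftime("%Y-%m-%d %H:%M:%S")    # "Get Date/Time String" analog
    readings = [random.random() for _ in range(3)]    # three "random number" channels
    rows.append([timestamp] + [f"{r:.6f}" for r in readings])
    time.sleep(1)                                     # once-per-second logging

# "Array To Spreadsheet String" analog: tabs between columns, newlines between rows.
spreadsheet_string = "\n".join("\t".join(row) for row in rows)

with open("log.xls", "w") as f:                       # example file name
    f.write(spreadsheet_string + "\n")
```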
  5. What function(s) are you using for writing the data to file? If you're using one of the high-level VIs such as "Write Characters to File.vi", that VI opens, writes to, and closes the file every time you run it. If you're continually saving data in a loop, it's inefficient and unnecessary to keep opening and closing the same file in every iteration. A better way to do it is to use the more advanced file functions. Use "Open/Create/Replace File.vi" to open/create a file reference outside of your loop and then pass the file reference into the loop. Use the "Write File" function to write your data to the file reference inside the loop. Then, after your loop, use the "Close File" function to close the file reference when your logging loop has finished running. Note: the Write File function doesn't actually write the data to the file immediately; it merely buffers it until (I believe) either the buffer gets full or you close the reference. You have to use the "Flush File" function to force your data to the file, but it may add time to your iterations. I haven't tried it personally, but maybe you can use the "Flush File" VI in a parallel loop to flush the data to the file reference at timed intervals?
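For what it's worth, here is a minimal text sketch of the same open-once/write-many/close-once idea, using plain Python file I/O as a stand-in for the LabVIEW file functions (the file name and flush interval are made up):

```python
import time

# Open the "file reference" once, outside the loop (Open/Create/Replace File
# analog), instead of opening and closing the file on every iteration the way
# the high-level write VIs do.
with open("datalog.txt", "w") as log_file:        # example file name
    for i in range(100):                          # stand-in for the logging loop
        log_file.write(f"{i}\t{time.time()}\n")   # "Write File" analog
        if i % 10 == 0:
            log_file.flush()                      # "Flush File" analog: push buffered
                                                  # data to disk at a chosen interval
        time.sleep(0.01)
# Leaving the with-block closes the reference ("Close File" analog).
```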
  6. There aren't any text commands in LabVIEW for setting control properties. If you right-click on a control and select CREATE >> PROPERTY NODE, you can use the property node to change any property of that control, from visibility to value to color, etc. You'll have to "CHANGE ALL TO WRITE" to set property values. If you're using the property node in the same scope as your control terminal, you don't have to feed it a refnum. If it's in a different scope (i.e. a different subVI), then you will have to pass a control reference from the desired control to the subroutine that will change the property. To create a control reference, simply right-click on the control and CREATE >> REFERENCE. I've attached an image with both methods, using flat sequence structures just as an example and changing the value of the control. As I said before, there are many other control properties you can control with these methods as well.
  7. Yes, with a state machine, you can keep coming back to your "timer case" so that you don't have to put multiple instances of the timer in between each piece of timed code.
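A tiny Python sketch of that idea, with made-up state names: every state that needs a delay routes back through one shared "wait" state instead of carrying its own timer.

```python
import time

# Every state that needs a delay routes back through one shared "wait" state
# instead of carrying its own timer. State names are invented.
state, next_state = "step_a", None
while state != "done":
    if state == "step_a":
        print("doing step A")
        state, next_state = "wait", "step_b"   # wait, then come back for step B
    elif state == "step_b":
        print("doing step B")
        state, next_state = "wait", "done"
    elif state == "wait":
        time.sleep(0.5)                        # the one and only timer case
        state = next_state
```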
  8. It could be possible, if he's using locals or property nodes, that he's resetting the values before his processes have completed execution, so they're reset while they're being written to but not at the end. I would say to check your execution order and ensure that you're resetting the values AFTER your processes have completed. Like I said, it would probably be easy to figure out if we could see the code itself.
  9. Can you provide an image or example of your code for us to see? I have a couple of ideas of what you're doing in my head, but don't know which one you're using.
  10. You 'picky bastard'. I actually don't use notifiers much and normally do the same, stop loops by destroying the reference, but for something I put together quickly for an example I just did it that way. In my actual programming I would have done it much better.
  11. Hello! I think I have an idea about what you're asking; I'll give it a shot and hopefully it's what you're looking for. 1. You are creating a VI that will start a test process and return the results. 2. You are creating a VI used solely for interface functions (display and control). You have to be careful with globals sometimes when it comes to reading and writing to them from different (simultaneous) processes. Globals can become hard to keep track of if you use them liberally. If you have a scenario where you get 2 processes writing to the same global at the same time, you'll end up with a race condition and the value of the global will not be reliable. If you're exclusively reading from a global in one process and writing to it in the other, then you should be okay. As far as telling the test process to "go", you can use occurrences for that, and then have the process update the global values, which you can then read from your user interface routine. If you haven't used occurrences before, check the examples provided with LabVIEW 6; they should give you a good idea of how they work. Using queues is another good method for communication between running parallel VIs, but it gets a little more involved if you're not familiar with their use. If you need any further clarification, or maybe an example, just let me know. EDIT: I had uploaded an example LLB, but remembered that you're using 6.0 (I'm using 7 Express). Here's a screen capture of an example I put together for you. I hope it helps. It shows multi-loop control using notifiers. The red sequence node represents your "process" VI, and the bottom while loop represents your interface update routine.
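If a text analogy helps, here is a minimal Python sketch of the same multi-loop idea, with queues standing in for occurrences/notifiers; all names are invented and nothing here is LabVIEW-specific:

```python
import queue
import threading
import time

# One "process" loop waits for a go signal, runs the test, and publishes the
# result; the "interface" loop sends the signal and reads the result. The
# queues stand in for occurrences/notifiers; all names are made up.
go_signal = queue.Queue()
results = queue.Queue()

def process_loop():
    while True:
        cmd = go_signal.get()                  # blocks, like waiting on an occurrence
        if cmd == "stop":
            break
        time.sleep(1)                          # pretend the test takes a while
        results.put({"test": cmd, "passed": True})

def interface_loop():
    go_signal.put("test_1")                    # "go" from the user interface
    print("result:", results.get())            # read the published result
    go_signal.put("stop")                      # shut the process loop down

worker = threading.Thread(target=process_loop)
worker.start()
interface_loop()
worker.join()
```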
  12. If nobody can answer that, can someone at least tell me if and how I can set the sampling rate on this PXI-6509?
  13. I've pretty much given up on triggered digital acquisition due to project time constraints and the fact that it's taking me so long to try to find information on it, so I'm just collecting several points of data, roughly 3-5 full square wave cycles, and processing the data afterward. The problem I'm running into, though, is that occasionally I get what appears to be a buffer reset in my collected data. I'll have a square wave that will have a little blip, or an extremely wide "high" cycle, amid normal "good" collected cycles. What I think is happening is that the card is missing some of the signal when it resets the read point to the start of the buffer, so the little abnormalities are actually the end of one buffer width spliced to the beginning of the next. Now, with DAQmx and this card, I am not allowed any buffer control with digital acquisition. I can't even read what the current 'read point' in the buffer is, tell it where to start reading, or anything. The only thing it appears I can control is the buffer size, but I'm not sure if it's even doing anything, because when I do this, I see no difference in my acquired data. Does anybody know any way to get around this problem, or should I continue to write little digital wave processing subVIs that will find and eliminate the odd-ball wave cycles? Also, just FYI, I'm not concerned about the 'splice' for viewing purposes; I'm doing calculations with the collected data, and inconsistent cycles are affecting the outcome stability of the calculations. Thanks guys!
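In case it's useful, here is a rough Python sketch of the kind of post-processing subVI I mean: measure the width of each high/low run in the sampled square wave and throw out cycles whose width is far from the median, which is where a buffer splice would show up (the threshold and sample data are invented):

```python
# Measure each high/low segment of the sampled square wave and discard runs
# whose width is far from the median. Thresholds and data are illustrative.
def segment_widths(samples):
    """Return (level, width) runs for a list of 0/1 samples."""
    runs, count = [], 1
    for prev, cur in zip(samples, samples[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((samples[-1], count))
    return runs

def drop_oddball_runs(runs, tolerance=0.5):
    """Keep only runs within tolerance * median of the median width."""
    widths = sorted(w for _, w in runs)
    median = widths[len(widths) // 2]
    return [(lvl, w) for lvl, w in runs
            if abs(w - median) <= tolerance * median]

samples = [0]*10 + [1]*10 + [0]*10 + [1]*27 + [0]*10 + [1]*10   # one "wide" high
good = drop_oddball_runs(segment_widths(samples))
print(good)
```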
  14. I'm running a PXI-6713 with DAQmx. I have a 60 Hz digital signal that I'm monitoring as a reference. What I want to do is: when the 60 Hz signal goes high, begin data acquisition at a specified frequency and rate. When I try to set up a VI to monitor digital input(s) and set up the trigger/timer, I get an error that reads: "Device property not supported by device or not applicable to this task." That basically tells me that I can't trigger the start of digital data acquisition using DAQmx and a PXI-6713 card. Seems to me that there has to be a way to do it, anybody know how?
  15. Nevermind guys, I set up a scenario and monitored the PXI controller processor performance using the change state timing VI and found that CPU usage went up to 100% or so. I created my own version of a digital input monitoring VI using the task start and read channels VIs with my own "new value notification", and it used FAR LESS resources (CPU usage with just my VI alone dropped to < 8%). I think the DAQmx timer change state function was monitoring the channels at full processor speed (~2 GHz), which is MUCH faster than I need for my application. Sorry to end up solving my own problem, but hopefully it will help those who might run into the same situation. Thanks anyways!
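For anyone curious, the idea reduces to something like the following Python sketch: poll the lines at a modest fixed rate and only act when the value changes. The read_lines() function is a hypothetical placeholder for the actual DAQmx read, and the numbers are just examples:

```python
import time

def read_lines():
    """Hypothetical placeholder for the actual DAQmx digital read of the port."""
    return [0, 0, 0, 0]      # replace with the real hardware read

# Poll at a modest fixed rate instead of letting a change-detection call spin
# the CPU, and only act (the "new value notification") when the value changes.
last_value = None
poll_period = 0.01           # 100 Hz polling is plenty for many DIO monitoring jobs
for _ in range(100):         # bounded here only so the sketch terminates
    value = read_lines()
    if value != last_value:
        print("lines changed:", value)    # stand-in for the notification
        last_value = value
    time.sleep(poll_period)
```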
  16. Bryan

    certification

    I've wanted to become a LabVIEW certified programmer for at least 3 years now, but I'm hesitant to pay the money to become certified because I'm not sure if I have enough knowledge to pass the test. Here's why: I had a LabVIEW basics course in college, where we used LabVIEW 4.0. I then took another LabVIEW Basics I course when I worked for a previous employer; we used 6i for that course, I believe. All of my remaining 5 years of experience with LabVIEW comes from learning as I go. I don't have any other formal education in LabVIEW except for the 2 basics courses I've taken, but I have written LabVIEW programs for RS-232, RS-485, GPIB, IMAQ and DAQ applications, and I have no other formal programming education; it's all self-taught. I consider myself to be a relatively advanced programmer, but am afraid that my basically self-taught experience with LabVIEW has left holes in my programming knowledge, which will be exposed come test time, and then I will have wasted my money taking a certification test that I could potentially fail. I guess I need some level of certainty before I would feel comfortable taking that route.
  17. In my PXI system, I dynamically create a channel/task with all of my lines/ports that are configured as inputs. Using the DAQmx Timing polymorphic VI (with Change Detection selected), I wire my task and the same physical channels used to create it to the VI. My intent is to monitor all input digital lines and return all of the input values if any line changes state at any time. Now, the VI has a timeout terminal (in seconds) that you can set to -1, which makes the VI wait indefinitely for a change on the input lines. This matches what I want to do; however, when I want to stop monitoring the channels, I have to wait until another state change before the VI will return. I've tried doing what I've done with queues and destroying/clearing the task, hoping that it will force the VI to return with an error, but nothing happens. Is there a way that I can abort the VI through software while it's waiting? I'm currently searching through the DAQmx VI palette for functions that might hold clues. My unfamiliarity with DAQmx isn't helping either. :headbang: One way I can think of to do this is to wire one of my output lines to one of the input lines I'm monitoring, and when I want to abort, set the output line high, which will generate the change and make the function return. I'd like to avoid tying up any of my digital lines like this if possible. Thanks guys!
  18. EXACTLY what John said. You can also use the "VISA Bytes at Serial Port" VI (basically a property node) to see exactly how many bytes worth of data are waiting at the port to be read, if you're not sure how many bytes to expect (located in the Serial palette). Wire the output of that to the byte count input on your "VISA Read" and it will read in exactly the number of bytes that are waiting. If you're proficient enough at LabVIEW, you can also use state machines, aka "action engines", to do all of your functionality using just one VI. This can get messy, though, if things aren't kept organized from the beginning. I normally try to do my drivers this way if they're not too involved.
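If you're scripting the same thing in text, the pyvisa (Python) counterpart of that property is bytes_in_buffer; here's a minimal sketch, assuming an ASRL (serial) resource and a made-up query string:

```python
import time
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL1::INSTR")   # example serial resource name

inst.write("SOME:QUERY?")                 # made-up command string
time.sleep(0.1)                           # give the instrument time to respond

# bytes_in_buffer plays the role of "VISA Bytes at Serial Port": read exactly
# the number of bytes currently waiting instead of guessing a byte count.
waiting = inst.bytes_in_buffer
response = inst.read_bytes(waiting)
print(response)
```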
  19. I typed in a whole lot of stuff for you, but in the process I thought of something. You can look at the serial communication examples in LabVIEW (HELP >> EXAMPLES) to get an idea of how to do it. It's relatively easy once you get the hang of it. You'll need to know the port configuration required by the device you're talking to (i.e. baud rate, parity, stop/data bits, etc.) for the configuration.
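As a text-based analogy of that configuration step, here is a minimal pyvisa (Python) sketch; the port name, query, and settings are placeholders and must match whatever your device expects:

```python
import pyvisa
from pyvisa.constants import Parity, StopBits

# The settings below are placeholders; they must match the device's port setup.
rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL1::INSTR")   # example port
inst.baud_rate = 9600
inst.data_bits = 8
inst.parity = Parity.none
inst.stop_bits = StopBits.one
inst.timeout = 2000                       # milliseconds

inst.write("*IDN?")                       # example query
print(inst.read())
```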
  20. I didn't know the global channels created can be accessed in MAX. Thanks! As for global channels vs. tasks, I haven't played enough with global channels to really be comfortable with them. I'm creating my channels and tasks dynamically, and I haven't played with global channels enough to know if I can create them dynamically with as much ease. I'm learning this stuff slowly but surely; my code is working, but is it working correctly? I'm not sure yet.
  21. Okay, I just got my new "toy" the other day (a PXI system with a controller running XP, a PXI-6713 AO card, and a PXI-6509 DIO card). The DIO card only works with the new DAQ 'stuff', so I am forced to learn DAQmx as opposed to the regular DAQ to which I am more accustomed. Now, in trying to figure out how to use all the stuff, I created a global channel, and now that I wish to delete it, I can't figure out how, and I've had no luck using LabVIEW help, etc. in finding out how to delete it. Can anybody help? Also, if any of you guys have used DAQmx before and have any useful tips, tricks, or heads-ups, I would appreciate you sharing them with me. I'm still trying to figure all of it out.
  22. Could you be a little more specific? To create a custom control, select it on the front panel of your VI, then from the EDIT dropdown menu, select "Customize Control". This will open the control editor with your selected control on the panel. From there you can make changes to the control. As far as advanced control editing is concerned, I won't be able to help you as I don't do much control customizing... yet.
  23. Oh, and never mind about my "G" question; I found it in the FAQ.
  24. I think it would be beneficial to have case structures able to select a case based on matching a regular expression when a string is wired to the selector terminal. Right now, they can accept ranges of values for one case, i.e. "a".."c" will accept any character from a to b. It would be neat to be able to type in something like "foo*, *bar" and have it accept things like "foo, foobar, foo bar, lowbar, low bar", etc.
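To illustrate the suggestion, here is a small Python sketch of what pattern-based case selection could look like, using glob-style patterns via fnmatch (the patterns and case names are just examples):

```python
from fnmatch import fnmatch

# Pick a "case" by matching the input string against glob-style patterns
# instead of exact values or ranges; patterns and case names are examples.
cases = {
    "foo*": "starts-with-foo case",
    "*bar": "ends-with-bar case",
}

def select_case(value):
    for pattern, case in cases.items():
        if fnmatch(value, pattern):
            return case
    return "default case"

for s in ["foo", "foobar", "foo bar", "lowbar", "low bar", "baz"]:
    print(s, "->", select_case(s))
```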