torekp — Members (216 posts, 3 days won)
Everything posted by torekp

  1. QUOTE(crelf @ Jun 7 2007, 08:30 PM) That's me. I'm back, for more trouble. QUOTE(lavezza @ Jun 8 2007, 04:02 AM) Some coworkers and I went to the Developer Education Day here in Phoenix. I didn't stay for the "Performance Optimization for Embedded LabVIEW Applications" discussion, but they did say that required inputs can reduce memory allocations. [...] Exactly what I was told. QUOTE(yen @ Jun 8 2007, 08:43 AM) Indeed. See Greg's (!) explanations in this thread: http://forums.ni.com/ni/board/message?board.id=170&message.id=191622#M191622 Now you've got me curious how the required-terminal and inside/outside-of-structure variations interact. Can you get good performance with a terminal inside the structure, as long as it's a required terminal on the connector pane? Or do you have to do it right both ways? At this rate, I'll need about two weeks to go back and fix all my subVIs.
  2. I'm getting error 1401 when trying to read datalogs that I created in 8.20. These datalogs were fine until I upgraded to 8.2.1 and mass-compiled. "Error 1401 occurred at Read Datalog in SG.lvlib:toplevel_read_datalog.vi Possible reason(s): LabVIEW: Attempted to read flattened data of a LabVIEW class. The version of the class currently in memory is older than the version of the data. You must find a newer version of the class to load this data." Note, I am always running my VIs from the development environment, both when creating datalogs and when reading them. When I create new datalogs I can read them; it's just the older datalogs that LabVIEW thinks contain a newer version of the class. :headbang: Any workarounds? Can I brute-force an uptick in the version number of my class, by going to its properties and just typing in a new version number, and if so do you think that would help? Should I change the data in my class in some subtle, harmless way (I32 to U32?) to cause a version number increase? I'm a total GOOP novice, and I don't want to mess things up any more than they already are.
  3. QUOTE(Aristos Queue @ Jun 6 2007, 04:31 PM) I thought my problem was caused by upgrading (and specifically, I'm guessing, by some change that happened during Mass Compile). I'm still inclined to think so. I never built an application; I worked from the development environment in 8.20, and again in 8.2.1. But I'll take your advice and move the question to the GOOP forum. Edit: the new thread is here: http://forums.lavag.org/index.php?showtopic=8397
  4. Sounds great. Your topics are excellent. :thumbup: Topic suggestions: handling of large arrays, long strings, etc. without degrading performance. SubVIs and performance. When to use 2D arrays (performance? memory?) versus when to create an array of cluster of 1D array (ease of programming, often), and similar tips/techniques for managing complex data sets.
  5. I'm getting error 1401 when trying to read datalogs that I created in 8.20. "Error 1401 occurred at Read Datalog in SG.lvlib:toplevel_read_datalog.vi Possible reason(s): LabVIEW: Attempted to read flattened data of a LabVIEW class. The version of the class currently in memory is older than the version of the data. You must find a newer version of the class to load this data." What the ...?
  6. QUOTE(Herbert @ May 17 2007, 05:36 PM) Yeah well, I restrained myself from adding "pants on fire" to "Liar, liar." The thing is, the LabVIEW thread doesn't immediately continue after "shoving your data into the Windows buffer". If it did, the elapsed time from the difference of the two tick counts would match what the Profiler says. Do you know if this cursed behavior is peculiar to Windows? Can Linux do this much better? How about a PXI box?
  7. QUOTE(torekp @ Apr 11 2007, 08:00 PM) Answer: they're about the same. Attached: my testing VI (8.2). I learned something very interesting about Profile Performance and Memory - the time taken to actually write to the files, apparently "doesn't count". According to my tick counts embedded in the code, it took over 2 seconds to write to the files ten times (not counting the initial 5 writes). According to the Profiler, the testing VI used only 500 ms. Liar, liar.
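The tick-count-around-the-writes measurement described in this post can be reproduced outside LabVIEW. Here's a minimal Python sketch of the same idea, a wall clock started before and stopped after a batch of writes (the function name, file, and write size are all illustrative; the original test was a LabVIEW VI):

```python
import tempfile
import time

def timed_writes(data, n_writes=10):
    """Time repeated file writes with a wall-clock counter, mirroring the
    tick-count-around-the-write measurement from the post. (Python sketch;
    the original test was a LabVIEW VI.)"""
    with tempfile.NamedTemporaryFile(delete=False) as f:
        start = time.perf_counter()
        for _ in range(n_writes):
            f.write(data)
            f.flush()  # hand the bytes to the OS; this is the time a
                       # function-level profiler can fail to attribute
        return time.perf_counter() - start

elapsed = timed_writes(b"\x00" * 1024)
print(f"10 writes took {elapsed * 1000:.2f} ms")
```

Comparing a wall-clock figure like this against a profiler's per-VI (or per-function) totals is exactly how the discrepancy above was caught: the profiler only counts time it attributes to your code, not time spent blocked in the OS.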
  8. Thanks for the pointer. One thing I find very attractive about this is that it's possible (I imagine - haven't played yet) to overwrite older data segments if the total file size would otherwise be excessive. Then I can decimate my data in-place as needed. With TDMS, I wouldn't know how to go about that.
  9. QUOTE(Gustavo @ Apr 16 2007, 01:32 PM) Here's what I did for a vaguely similar project. Have an XY graph on your front panel. On your block diagram, have an array of clusters. Each cluster contains 2 arrays, an array of X values and one of Y values. Each cluster of 2 arrays will become one line on the graph. You can have several lines be data acquired from hardware if you like, and several lines be user-defined by drawing on the graph as in the Draw Graph with Events example. Every time the users want to create a new "auxiliary line," let them push a Boolean button on the front panel. In your block diagram you respond by adding a new cluster of 2 empty arrays, which then get populated when users drag/click the mouse. Make sense? If not, post what you've got and I'll try to modify it.
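For readers more comfortable with text-based pseudocode, the data model in this post — one cluster of (X array, Y array) per plot line — can be sketched in Python (the function names are invented for illustration; none of this is LabVIEW API):

```python
# Toy model of the "array of clusters of two arrays" idea:
# each plot line on the XY graph is a pair (x_values, y_values).
lines = []  # one entry per plot line

def add_line():
    """User pressed the 'new auxiliary line' button: start an empty line."""
    lines.append(([], []))

def add_point(line_index, x, y):
    """User clicked/dragged on the graph: append a point to that line."""
    xs, ys = lines[line_index]
    xs.append(x)
    ys.append(y)

add_line()              # one user-drawn line
add_point(0, 0.0, 1.0)
add_point(0, 2.0, 3.0)
add_line()              # a second, still-empty line
print(lines)  # [([0.0, 2.0], [1.0, 3.0]), ([], [])]
```

In LabVIEW terms, `lines` is the array of clusters, each `add_line` is the Build Array step in the button's event case, and each `add_point` is the append done in the mouse event case.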
  10. I forgot the part that makes this a "design and architecture" question: Do you think it makes sense to just allow the DAQ output task to continue, and not stop it? That is my plan at this point. I was also thinking of dynamically loading as many subVIs as reasonably possible, so that when the main program is finished and we're just keeping the hardware warm, there's a small and efficient program taking care of it.
  11. I need to produce fresh analog output during each cycle of a DAQ operation, as well as reading analog input. One way to enforce the "fresh output" requirement is to set the DAQmx property Do Not Allow Regeneration - and currently, that's what I'm doing. But that's a mistake. I need to Allow Regeneration. Yet I still need to monitor the DAQ operations and pop up a message to the user if I fail to provide fresh AO data. How can I monitor the writes and see if they successfully "made it under the wire" before the relevant samples were generated? The reason I need to Allow Regeneration is that I want the AO to continue without interruption after the "real" program is stopped. The continuing AO will keep my hardware warmed up, but if the AO is interrupted while the hardware is left on, the hardware could be damaged. To reduce the processing load on the computer, there should be no more writing of data after the "real" program is done. Here is the core "write then read" VI, which sits inside my main loop. Bear in mind that before the loop begins, I have written quite a few more samples to AO than I have read from AI, so there is a bit of breathing room: http://forums.lavag.org/index.php?act=attach&type=post&id=5525 The VI in which I initialize the DAQ operations is too ugly to post, but suffice it to say that the DAQmx write property is currently set to Do Not Allow Regeneration. Here's the order of operations to be controlled by my program: 0. Warm up the equipment by producing analog output (this part is not written yet) - until the user presses start. 1. Another computer, which runs a vision system looking at a belt, tells my computer, "Hey, there's an item located at (X1, Y1) and another at (X2, Y2)." 2. LabVIEW constructs about 100 kSa of analog output, with the goal of making sure that the mobile sensor will visit (X1, Y1) and (X2, Y2) when the relevant stretch of belt arrives.
It writes this into the buffer, and when the DAQ output rolls around to that part of the buffer, everything will be good. That is, as long as the write happened BEFORE it was needed and not after. (The queue, which holds AO data before it goes into the buffer, is probably dispensable - but this is a work in progress.) 3. While the DAQ output handles that part of the buffer, the sensor data from (X1, Y1) and (X2, Y2), and from the periods of travel in between, is going into the AI board. 4. Read the AI and crunch the data - were the items good, or were they bad? Lots of computation involved. 5. Go to step 1, and pray (see step 2, last point) that step 4 isn't taking too much time. 6. When the user presses stop, the analog output should continue seamlessly and ad infinitum - this part is not written yet. My question(s): How can I monitor the writes and see if they successfully "made it under the wire" before the relevant samples were generated? Or am I attacking this from the wrong angle, do you think?
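One way to frame the "under the wire" check is to compare your own running write total against how many samples the card has generated so far. Here's a toy Python model of that comparison (the class and all names are invented for illustration; on real hardware you'd read the generated-sample count from the DAQmx driver rather than computing it from the clock rate):

```python
class OutputBufferMonitor:
    """Toy model of the 'did my write beat the output pointer?' check.
    total_written = samples handed to the AO buffer so far.
    generated     = samples the card has produced (here estimated from
                    rate * elapsed time; a real driver reports this).
    The write is 'under the wire' iff the card has not yet consumed past
    the start of the block we just refreshed. (Assumed names; not DAQmx.)"""
    def __init__(self, rate_hz):
        self.rate = rate_hz
        self.total_written = 0

    def write_block(self, n_samples, elapsed_s):
        generated = int(self.rate * elapsed_s)
        start_of_block = self.total_written
        self.total_written += n_samples
        # If generation already passed the old write position, the card
        # regenerated stale data before our fresh block arrived.
        return generated <= start_of_block

mon = OutputBufferMonitor(rate_hz=100_000)
mon.total_written = 720_000                      # buffer pre-filled before the loop
ok = mon.write_block(100_000, elapsed_s=5.0)     # 500k generated < 720k written
late = mon.write_block(100_000, elapsed_s=9.0)   # 900k generated > 820k written
print(ok, late)  # True False
```

The pre-fill before the loop (the "breathing room" mentioned above) is what makes the first comparison succeed: the write position starts 720k samples ahead of the generation position.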
  12. QUOTE(Gustavo @ Apr 13 2007, 12:08 PM) Find the NI example called Draw Graph with Events - what you're doing sounds similar.
  13. stepper motor

    bhati, What's your question? I wrote a LabVIEW program once to control a stepper motor via parallel port, but it sounds like you've already got that and want something else. But I'm not sure how "directly" you can control your stepper motor via RS232; I'd think there's got to be more hardware involved, and if you tell us what that is, maybe someone here could help.
  14. QUOTE(Ben @ Dec 18 2006, 05:31 PM) Hey Ben, or anyone, does pre-writing of files help with datalogs? I'm pre-writing a ton of data, then setting the # of records = 0. But perhaps this defeats my purpose. Is it possible/better, after pre-writing the phony data, to simply set the file position at 0 records from start? Will the datalog then correctly overwrite the phony data with real data?
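The byte-stream analogue of pre-writing a file and then overwriting in place looks like this in Python (a sketch only; LabVIEW datalogs are record-based, so the real question in the post is whether setting the file position back to record 0 behaves the same way):

```python
import os
import tempfile

def preallocate(path, size_bytes):
    """Pre-write placeholder bytes so the space is reserved up front, then
    reopen without truncating and rewind, so real data overwrites the
    placeholder in place. (Byte-stream sketch; the datalog analogue would
    be setting the file position to record 0 instead of byte 0.)"""
    with open(path, "wb") as f:
        f.write(b"\x00" * size_bytes)   # phony data reserves the space
    f = open(path, "r+b")               # reopen: "r+" does NOT truncate
    f.seek(0)                           # real data will overwrite from the start
    return f

path = os.path.join(tempfile.gettempdir(), "prealloc_demo.bin")
f = preallocate(path, 1024)
f.write(b"real data")
f.close()
print(os.path.getsize(path))  # 1024: the in-place overwrite didn't resize it
```

The key detail is the reopen mode: truncating the file ("w" mode, or setting record count to 0 if the datalog layer frees the space) would defeat the purpose of the pre-write.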
  15. You can probably use Match Regular Expression from the string palette to check whether the user's entry is numeric. Don't ask me how, though, unless you want to keep it really simple. If you only allow digits and the . for decimal point, put [^0-9\.] as your regular expression, and if you find a match, it's non-numeric. If you want to change the table size (i.e., the size of the 2D array of strings) but not the appearance, don't use those properties -- just write an array of blank strings to your table. For example, if you leave the front panel size as shown but write a 10x10 array of blank strings to the table, then the vertical and horizontal scrollbars will no longer be grayed out.
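The same character-class check, sketched in Python for anyone who wants to test the pattern outside LabVIEW (the helper name is mine):

```python
import re

def is_numeric(entry):
    """Reject any entry containing a character other than digits or '.',
    using the same character-class idea as the post's [^0-9\\.] pattern:
    finding a forbidden character means the entry is non-numeric."""
    return re.search(r"[^0-9.]", entry) is None and entry != ""

print(is_numeric("3.14"))  # True
print(is_numeric("12a"))   # False
print(is_numeric("-5"))    # False: the post's pattern disallows '-' too
```

Like the original pattern, this accepts strings with more than one decimal point (e.g. "3.1.4") and rejects a leading minus sign; extend the check if you need to handle those cases.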
  16. How about a table with columns labeled Register# and Value. As soon as the user enters a new Register#, your VI automatically updates the Value column to that register's current value. If the user tries to enter an already-listed register number, you beep or flash, and move the cursor into the column where that register's value is listed. This idea makes good sense if the number of registers the user wants to change tends to be very small compared to the number of registers. Otherwise I like your idea better.
  17. Thanks guys. I will do some reading as you suggest. I did come up with a solution, which assumes that the distributions are normal distributions. It's kinda ugly but it converges in a reasonable number of iterations nonetheless. If anyone wants to use it, just ask.
  18. Let's say you have two groups of items, "good" ones and "bad" ones, and you perform some test on them and each item gets a score from the test. Let's say a high score indicates good. Unfortunately the test is not perfect: some of the good ones get lower scores than some of the bad ones. Take this XY graph for example, let's say the red ones are the good ones: http://forums.lavag.org/index.php?act=attach&type=post&id=5312 You're going to separate the items into two piles based on their score. Putting a bad item in the "good" pile has a cost of 1, while putting a good item in the "bad" pile has a cost of R (0<R<Inf). Where would you draw the line - and how would you program it in LabVIEW? Has anyone here already done this - and if not, would you be interested in receiving my solution, once I get one? So far, I'm planning to pretend (and I do mean pretend) that my distributions are normal distributions, and use the "Normal CDF" (Cumulative Distribution Function) VI from the math palette. On each iteration, I'll take some guesses about where the cutoff should be and calculate the cost of each one using the Normal CDF. And here's the tricky part: I want to start with three-plus guesses about where the cutoff will be that straddle the right answer, fit a curve through these points, and use "Brent with Derivatives 1D" optimization. The only way I can think of to guarantee this "straddle" is to test some fairly wild guesses - say, the low mean minus lots of standard dev's, and the high mean plus lots of stdev's. If one of these turns out to be the best initial guess, and better than a slightly less wild guess, then I simply declare it to be the "right" answer and don't do any optimization.
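The assume-normal version of this cost minimization is easy to prototype in text form. Here's a Python sketch that replaces the Brent optimization with a plain grid search between the "wild" straddling guesses described above (all function names, and the equal-group-size assumption, are mine):

```python
import math

def normal_cdf(x, mu, sigma):
    """Standard normal CDF via the error function (the role the
    'Normal CDF' VI plays in the post's plan)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def expected_cost(cutoff, good_mu, good_sigma, bad_mu, bad_sigma, r):
    """Cost of calling everything above `cutoff` good:
    bad items above the cutoff cost 1 each (bad item in the good pile),
    good items below it cost r each (good item in the bad pile).
    Assumes equal-sized groups and normal score distributions."""
    bad_above = 1.0 - normal_cdf(cutoff, bad_mu, bad_sigma)
    good_below = normal_cdf(cutoff, good_mu, good_sigma)
    return bad_above * 1.0 + good_below * r

def best_cutoff(good_mu, good_sigma, bad_mu, bad_sigma, r, steps=2000):
    """Brute-force grid search between the wild straddling guesses
    (simpler, if slower, than Brent with Derivatives 1D)."""
    lo = bad_mu - 6 * bad_sigma
    hi = good_mu + 6 * good_sigma
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(grid, key=lambda c: expected_cost(
        c, good_mu, good_sigma, bad_mu, bad_sigma, r))

cut = best_cutoff(good_mu=10.0, good_sigma=1.0, bad_mu=5.0, bad_sigma=1.0, r=1.0)
print(round(cut, 1))  # symmetric case with r=1: cutoff at the midpoint, 7.5
```

With equal sigmas and r=1 the optimum sits at the midpoint of the two means, which makes a handy sanity check; raising r pushes the cutoff down (fewer good items sacrificed), lowering it pushes the cutoff up.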
  19. I don't fully understand the options regarding "allowing regeneration" of output data: I've got a process that involves analog output and input in sync. They're externally clocked, and the two PCI cards are connected by RTSI. In each cycle I write 100k samples to the 720k output buffer and read 100k from the input buffer and crunch that data (not exactly 100k, but never mind that). After data crunch, go back to "write 100k of the output". Right now I have the analog output task regeneration mode set to "Do Not Allow," because I want an error to happen if the data crunch takes too long. I want to write fresh output data for every cycle's worth. I'd like to change to "Allow Regeneration" and still have the program cry foul if the crunch-of-analog-input-data takes too long. Why would I want to Allow Regeneration, you ask? Because after the real process is done, I'd like to seamlessly transition into "warmup" mode where the analog output continues without stopping, but without writing the buffer repeatedly (let's conserve computer resources here). So, how do I Allow Regeneration and still cry foul in the event that my program takes too long to cycle back to the point where it would write some fresh data to the output buffer?
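Since the driver won't raise the error for you once regeneration is allowed, one simple approach is a self-imposed deadline: after each write, the buffer headroom divided by the sample rate tells you how long until the card loops back to the block you just refreshed. A Python sketch of that watchdog logic (illustrative only; in LabVIEW this would be a tick-count comparison in the main loop, and the names are invented):

```python
import time

def cycle_watchdog(buffer_samples, write_samples, rate_hz):
    """With Allow Regeneration on, the driver raises no error if you're
    late, so check the deadline yourself. After each write you have
    roughly (buffer headroom) / rate seconds before the card wraps
    around and regenerates the block you were supposed to refresh."""
    deadline_s = (buffer_samples - write_samples) / rate_hz
    t_last_write = time.perf_counter()

    def check():
        # True -> cry foul: the deadline passed and stale data was
        # (or is about to be) regenerated.
        return (time.perf_counter() - t_last_write) > deadline_s

    return check

check = cycle_watchdog(buffer_samples=720_000, write_samples=100_000,
                       rate_hz=100_000)
print(check())  # immediately after the write: not late yet
```

With the numbers from the post (720k buffer, 100k per write, externally clocked at some rate), the deadline is the 620k-sample headroom divided by the clock rate; re-arm the watchdog after every successful write, and you get the "cry foul" behavior without relying on Do Not Allow Regeneration.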
  20. QUOTE(SciWare @ Apr 18 2006, 02:02 AM) Thanks, that's better than what I was doing (I had been stripping in case of .exe, but I hadn't thought of .LLB). On a related note, does anybody here use the EXIT.vi found here to close executables' windows on exit? I don't use it as written (I wrote a partial-copy instead) because it assumes that only the top-level VI should be causing an exit of the executable - I guess that might be good form, but I'd rather be flexible.
  21. Thanks guys! Chris, I almost did write my own LabVIEW backup routine - then slapped myself in the forehead. "There I go again," I said, "reinventing wheels." QUOTE(Mikkel @ Mar 2 2007, 10:03 AM) Got any favorites off the list? I don't understand the point of some of these things, for example, application launchers. Is there something wrong with the built-in Windows "start" button? Well, I guess it can be kind of klutzy, but... I just have no clue about how useful most of these things are. Which is why I want to hear y'all's rants and raves.
  22. Today I went looking for a free/cheap backup utility. I found Synchredible, which works and has some nice features (like being free), but doesn't zip (that would be nice). Anyway, I invite your opinions on this and any other super-useful categories. For another example, a while back we praised the virtues of SpaceMonger and SequoiaView, which are disk-usage mappers. So, tell me what else I'm missing: conveniences I never imagined, but soon won't be able to imagine how I lived without!
  23. In which forum / subforum should I ask for advice on, for example, obtaining a free or low-cost backup utility program?
  24. Your wish is granted - sort of. This may be a bit of overkill. LabVIEW 8.0. It accepts an array of strings for "patterns"; it will not accept a single string. You have my blessing to polymorphize it, though. http://forums.lavag.org/index.php?act=attach&type=post&id=5086
  25. Hey Tom, have you considered the LabVIEW Report Generation alternative? I used to do what you do (sort of), but found the LabVIEW Report to be much easier. I use HTML-style reports; haven't really tried the other style. The resulting files are a lot smaller than a comparable Excel file (if that matters).