
Gary Rubin

Members
  • Posts: 633
  • Joined
  • Last visited
  • Days Won: 4

Posts posted by Gary Rubin

  1. I've heard it said that left-handed people tend to be more analytical. Based on that, I wonder how the LAVA community's left-handedness compares to that of the general population. According to the Wikipedia entry on handedness (linked in the original post), 8-15% of the population is left-handed.

  2. QUOTE(yen @ Aug 19 2007, 06:32 AM)

    Try writing anything in the string control (e.g. The quick brown fox jumps over the lazy dog) and erasing your mistakes, all by using the left side of the keyboard (use Caps Lock to toggle the right side) and tell the world what you think. Is this any good?

    I'm still trying to decide whether it makes more sense to my brain to mirror the keyboard or to shift it (i.e., H maps to G vs. H maps to A).
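    For what it's worth, the two remappings being compared can be written out explicitly. A minimal sketch (Python; the half-QWERTY tool from the quoted post is not shown here, and the layout tables below are purely illustrative) of the mirrored layout, where each key swaps with its counterpart in the same row, versus the shifted layout, where the right half is translated onto the left half:

    ```python
    # Two candidate remappings for typing one-handed on the left half of a
    # QWERTY keyboard (illustrative only, not the tool from the quoted post).

    ROWS = ["qwertyuiop", "asdfghjkl;", "zxcvbnm,./"]

    def mirrored_map():
        """Each key maps to its mirror image in the same row: H <-> G, J <-> F."""
        return {ch: row[len(row) - 1 - i] for row in ROWS for i, ch in enumerate(row)}

    def shifted_map(offset=5):
        """The right half is shifted onto the left half: H -> A, J -> S, K -> D.
        Each row is treated as circular, so only the right-hand entries matter."""
        return {ch: row[(i + offset) % len(row)] for row in ROWS for i, ch in enumerate(row)}

    print("mirrored:", mirrored_map()["h"])  # -> 'g'
    print("shifted: ", shifted_map()["h"])   # -> 'a'
    ```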

  3. QUOTE(skof @ Aug 9 2007, 09:44 AM)

    Oh, this is a philosophical question. ;)

    I suppose that a "Major Defect" is a defect that affects application functionality or the interface in a way that is undesirable to the customer. :rolleyes:

    Maybe this is a reflection of my ignorance of Software Engineering principles, but isn't that more a function of the programmer than the programming environment?

    Edit: Never mind - I misconstrued the original post; he's not asking what the defect density is in LabVIEW code, but how to measure it.
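    As a reference point for "how to measure it": defect density is usually just defects found divided by code size, typically expressed per thousand lines of code and tracked per release; for LabVIEW, lines would have to be replaced by some other size metric such as node or VI count. A minimal sketch with made-up numbers:

    ```python
    # Defect density = defects found / size of the code base, typically per KLOC.
    # For a graphical language like LabVIEW, "size" would have to be something like
    # node count or VI count; the numbers below are made up purely for illustration.

    defects_found = 42          # defects logged against the release (hypothetical)
    code_size_nodes = 18_500    # total nodes across all VIs (hypothetical)

    density = defects_found / (code_size_nodes / 1000.0)
    print(f"{density:.2f} defects per 1000 nodes")  # -> 2.27 defects per 1000 nodes
    ```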

  4. QUOTE(Kevin P @ Aug 1 2007, 11:48 AM)

    I think the very first reply from Mikael gives the answer. The explanation *why* is that the Timestamp function used in the original posted code only has a time resolution of 15.6 msec. It's simply quantization error. One call occurs at, say, (X).9998 quanta and the next call occurs at (X+1).0003 quanta. The reported time difference isn't because the execution actually took longer, it's because the measurement got quantized.

    The LabVIEW Profiler usually shows maximum execution times of about 15 ms, even for very fast VIs. I assume these things are related.
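    Kevin's quantization point is easy to reproduce outside LabVIEW: if a timer only advances in ~15.6 ms steps, a sub-millisecond operation gets reported as either 0 ms or one full tick, depending on whether it happens to straddle a tick boundary. A rough simulation (plain Python; the 15.6 ms quantum is taken from the quote above):

    ```python
    import random

    QUANTUM = 0.0156  # ~15.6 ms timer resolution, as in the quoted post

    def quantize(t):
        """Round a true time down to the most recent timer tick."""
        return int(t / QUANTUM) * QUANTUM

    def measured_duration(start, true_duration):
        """What a low-resolution timestamp pair reports for an interval."""
        return quantize(start + true_duration) - quantize(start)

    # A 0.3 ms operation measured at random start times is reported as either
    # 0 ms or 15.6 ms -- never as its true duration.
    readings = [measured_duration(random.random(), 0.0003) for _ in range(10_000)]
    print(sorted({round(r * 1000, 1) for r in readings}))  # -> [0.0, 15.6]
    ```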

  5. QUOTE(Ben @ Jul 31 2007, 07:29 PM)

    Again with the virus checking already! :headbang:

    I do not understand all of the details, but virus checkers have their hooks into everything, and when they kick in, they seem to go into kernel mode and do not allow other threads to run until they have handled this "emergency situation", like opening a file in Notepad or browsing to a new folder.

    I swear I used to be able to get my PC performance to flat-line prior to performance profiling, but now, with virus checking, I have been stymied trying to get the PC to just do nothing.

    :headbang:

    Ben

    No virus checking on this machine - it's a stand-alone system with no external connections, so we didn't install any antivirus...

  6. QUOTE(Ben @ Jul 31 2007, 09:58 AM)

    1) Set Windows to optimize background services.

    Windows by default will attempt to optimize its scheduling to make foreground processes perform well. For single threaded applications this is fine but for LabVIEW, some of its background threads can suffer.

    Ben,

    Thank you for reminding me of this. I wonder if this explains why my processing seems to bog down, yet my Core 2 processor never shows more than about 60% usage... Certainly something to try.

    Gary
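    For anyone wanting to check that setting programmatically: to the best of my knowledge it lives in the Win32PrioritySeparation value under HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl, with 0x18 commonly corresponding to "Background services" and 0x26 to "Programs" - treat the path and values as assumptions and verify against the Performance Options dialog. A quick read-only check in Python:

    ```python
    # Read the current foreground-vs-background scheduling bias on Windows.
    # Registry path and typical values are assumptions from memory:
    #   0x18 (24) ~ "Background services", 0x26 (38) ~ "Programs".
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Control\PriorityControl"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "Win32PrioritySeparation")

    print(f"Win32PrioritySeparation = 0x{value:02X}")
    if value == 0x18:
        print("Scheduling currently favors background services")
    elif value == 0x26:
        print("Scheduling currently favors foreground programs")
    else:
        print("Custom value; check the Performance Options dialog")
    ```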

  7. QUOTE(crelf @ Jul 20 2007, 07:51 AM)

    I'd like to, but I don't have the appropriate versions installed - anyone else care to help out?

    Again, it takes ages to load unless you explicitly link the DLL to its location, so please don't just put "sensor.dll" in the "Library name or path" - put the whole path in there too. Also, it doesn't seem to work with LabVIEW 8.20 - I get sound, but it doesn't change with tilt in any direction...

    When I open it on mine, the full path is there.

  8. QUOTE(crelf @ Jul 19 2007, 03:04 PM)

    I'm thinking of something like turning your laptop into an etch-a-sketch... :D

    I'm not at all familiar with the Mindstorms capabilities... Could you use the laptop as a remote control? Tilt it right, and the robot turns right, tilt it forward and it accelerates, etc.?
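    The remote-control idea boils down to a small mapping from tilt angles to drive commands. A purely hypothetical sketch of that mapping (none of the names below correspond to an actual Mindstorms or ThinkPad API; it only shows the pitch-to-speed, roll-to-turn idea):

    ```python
    def tilt_to_drive(pitch_deg, roll_deg, max_tilt=30.0):
        """Map laptop tilt to differential-drive motor commands.

        pitch_deg: forward/back tilt -> speed; roll_deg: left/right tilt -> turn.
        Returns (left_motor, right_motor) in the range -1.0 .. 1.0.
        Illustrative only; real control would go through whatever robot API is used.
        """
        clamp = lambda v: max(-1.0, min(1.0, v))
        speed = clamp(pitch_deg / max_tilt)   # tilt forward -> drive forward
        turn = clamp(roll_deg / max_tilt)     # tilt right   -> turn right
        return clamp(speed + turn), clamp(speed - turn)

    print(tilt_to_drive(15.0, 10.0))  # gentle forward-right arc
    print(tilt_to_drive(0.0, 30.0))   # -> (1.0, -1.0), spin in place
    ```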

  9. QUOTE(crelf @ Jul 19 2007, 02:48 PM)

    Ooooo that's so cool! I've got real work to do, but I might be able to squeeze this one in just for its coolness factor ;)

    I've come to expect grand things from you, Chris. When will we see a Code Repository entry for wiring LabVIEW diagrams by changing the orientation of your lap? :P

  10. I just discovered that my IBM Thinkpad has a 2-axis tilt sensor and probably a 3-axis accelerometer in it. It's part of something called "ThinkVantage", and is used primarily to disable the hard drive when motion gets too extreme.

    Has anyone tried to access this information? Sounds like it would be fun to play with.

    EDIT: Just found this: http://www.stanford.edu/~bsuter/thinkpad-accelerometer/.

    Too bad I've got real work to do...

    Gary
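    For anyone who wants to poke at the sensor outside LabVIEW: the community write-ups (including the Stanford link above) describe an export in the Active Protection sensor.dll, commonly reported as ShockproofGetAccelerometerData, which fills a small structure whose first fields are a status word and the raw X/Y readings. The function name, DLL path, and field layout below are all taken from those reverse-engineering notes as I remember them, so treat every one of them as an assumption and check the linked page before relying on it. A hedged ctypes sketch, using the full DLL path as suggested earlier in the thread:

    ```python
    # Hedged sketch of reading the ThinkPad Active Protection accelerometer.
    # Function name, DLL location, and buffer layout follow community
    # reverse-engineering notes (see the Stanford link above); all assumptions.
    import ctypes
    import struct

    sensor = ctypes.WinDLL(r"C:\Windows\System32\sensor.dll")  # adjust path as needed

    # Use a buffer larger than the assumed structure so the call cannot write
    # past the end even if the real layout turns out to be bigger.
    buf = ctypes.create_string_buffer(64)
    sensor.ShockproofGetAccelerometerData(buf)

    # Assumed layout: int status, then unsigned short raw X and Y readings.
    status, x, y = struct.unpack_from("<iHH", buf.raw)
    print(f"status={status}  x={x}  y={y}")
    ```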

  11. Thanks for the ideas.

    QUOTE(Kevin P @ Jul 18 2007, 09:10 AM)

    I'm doing that. Everything is preallocated (as much as possible).

    QUOTE(Kevin P @ Jul 18 2007, 09:10 AM)

    - Processing routines that generate a buffer allocation dot are made into subVIs. The output data requiring a buffer allocation becomes a candidate for an Uninitialized Shift Register (USR).

    - I either initialize this USR array once using the "First Call?" primitive, or I turn the subVI into a small Action Engine with explicit "Initialize" and "Process Data" cases. It depends on whether I can live with the memory allocation delay on the first call or not.

    A simple example (but maybe not a good example) of something that always generates an array allocation is an array subset or index array call.

    Are you suggesting replacing method 1 (below) with method 2?

    http://forums.lavag.org/index.php?act=attach&type=post&id=6394
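    For readers not steeped in LabVIEW idiom, Kevin's suggestion amounts to giving the processing routine its own persistent, preallocated output buffer so that every call reuses the same memory instead of allocating a new array. A rough textual analog in Python (numpy just makes the preallocation explicit; the initialize/process split mirrors the Action Engine cases he describes):

    ```python
    import numpy as np

    class SubsetProcessor:
        """Analog of a subVI with an uninitialized shift register (USR):
        the output buffer is allocated once and reused on every call."""

        def __init__(self):
            self._buf = None  # the "USR": empty until initialized

        def initialize(self, length):
            """Explicit 'Initialize' case: pay the allocation cost up front."""
            self._buf = np.zeros(length)

        def process(self, data, start, length):
            """'Process Data' case: copy a subset into the reused buffer
            instead of allocating a fresh output array each call."""
            if self._buf is None or len(self._buf) != length:
                self.initialize(length)        # "First Call?"-style fallback
            np.copyto(self._buf, data[start:start + length])
            return self._buf

    proc = SubsetProcessor()
    proc.initialize(1000)                      # allocate once, up front
    samples = np.random.rand(100_000)
    window = proc.process(samples, 5000, 1000) # no new allocation per call
    ```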

  12. QUOTE(Ben @ Jul 17 2007, 01:14 PM)

    For RT projects that had a lot of data and a bunch of analysis, I set up an Action Engine that handled everything to do with the data.

    The AE had actions to

    "Read form I/O" - This just read the hardware and put the results in SR's.

    "Analyze" to crunch the numbers and determine the results

    "Post" to make the result available were required.

    This let me work completely in-place and left the rest of memory for me to buffer the results.

    My "datapoints" are basically records consisting of 28 fields. I keep a history of 20k, so that's one 560k-element array. The datapoints are placed (by reference) into 2000 different "bins", with each bin having a depth of 64, so that's a 128k-element array. Figuring out which bin to put each new datapoint in, as well as other bookkeeping uses linked lists, so tack on a few more 20k and 2k-element vectors. Several different subVI's need access to those two big arrays in order to do their thing. I found that scaling by bin depth from 200 to 64 had a pretty significant impact on execution speed, even though calculations/loop lengths didn't change; this is why I came to the conclusion that my bottleneck is memory related, rather than computational.

    I've managed to get rid of most of the buffer allocation dots. I am using the Quotient/Remainder operator on an array and am only using the remainder output. I noticed that there's a buffer allocation dot on the unwired IQ output. I thought that LabVIEW knows whether an output is wired and doesn't allocate space for it if it isn't. Am I mistaken about that?
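    In text-language terms, Ben's Action Engine is a single non-reentrant routine that owns all of its state and exposes a handful of named actions, so the big arrays live in one place and every caller works on them in place. A loose Python analog of the Read/Analyze/Post pattern, reusing the sizes from the post above (28-field records, 20k history, 2000 bins of depth 64); the binning rule and the analysis step are made up:

    ```python
    import numpy as np

    class DataEngine:
        """Loose analog of a LabVIEW Action Engine: one object owns the big
        preallocated arrays and exposes Read / Analyze / Post actions."""

        N_FIELDS, HISTORY, N_BINS, BIN_DEPTH = 28, 20_000, 2_000, 64

        def __init__(self):
            # "Initialize" action: allocate everything once, up front.
            self.history = np.zeros((self.HISTORY, self.N_FIELDS))            # 560k elements
            self.bins = np.full((self.N_BINS, self.BIN_DEPTH), -1, np.int32)  # 128k references
            self.bin_fill = np.zeros(self.N_BINS, dtype=np.int32)
            self.write_idx = 0
            self.last_result = None

        def read_from_io(self, record):
            """'Read from I/O': store one record in place in the history buffer."""
            idx = self.write_idx % self.HISTORY
            self.history[idx, :] = record
            bin_id = int(record[0]) % self.N_BINS          # made-up binning rule
            slot = self.bin_fill[bin_id] % self.BIN_DEPTH
            self.bins[bin_id, slot] = idx                  # store by reference (index)
            self.bin_fill[bin_id] += 1
            self.write_idx += 1

        def analyze(self):
            """'Analyze': crunch the numbers without copying the big arrays."""
            self.last_result = float(self.history[:, 1].mean())  # placeholder analysis

        def post(self):
            """'Post': make the result available where required."""
            return self.last_result

    engine = DataEngine()
    engine.read_from_io(np.arange(28, dtype=float))
    engine.analyze()
    print(engine.post())
    ```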

  13. QUOTE(Ben @ Jul 17 2007, 12:46 PM)

    The execution threads are all about the process the OS carries out to decide "what value do I set the program counter to next?"

    It has nothing to do with whether data gets copied or buffers get reused.

    Thanks Ben,

    That's what I suspected, but every time I think I understand something about the rules for memory reuse, I find that it's more complicated than I expected.

