brian

Members
  • Posts

    171
  • Joined

  • Last visited

  • Days Won

    3

Everything posted by brian

  1.-6. [Photo gallery images] © 2013 Brian H. Powell

  7. But Relf, what about all those years you double-charged me for the BBQ? And don't you owe Justin and me a round from the New Orleans "planning" meeting we had earlier this year at that bar in the French Quarter?
  8. Justin, I wanted to let you know that Relf is paying for me again this year.
  9. Thanks for the feedback. There are some things I'd like to see improved in the HAL decomposition we've put forth (http://zone.ni.com/devzone/cda/epd/p/id/6307), but some of your feedback is new.

     At the risk of introducing new terminology, I prefer to talk about a "measurement abstraction layer" (MAL) and a "hardware abstraction layer" (HAL). The ASL is closer to the MAL, and the DSSP is closer to the HAL. The MAL can present a high-level measurement--e.g., a "stimulus/response test" or "filter characterization test"--and implement the test strategy--e.g., to use an RFSG/RFSA frequency sweep or a wideband pulse measurement. I would expect both the MAL and HAL layers to be OO.

     One of the cool things about our approach is that you can choose to simulate (stub/mock) at either level of the abstraction; it's just a matter of writing another implementation class. If you want to use simulation at the HAL layer, then we already have simulation built into our IVI and modular instrument drivers.

     It sounds like you want to record real-world measurement data and then play it back. Having a HAL in place will facilitate recording the data, since you can connect to your actual test bench and capture the data to a file without touching the rest of your application. Then you can use a different HAL implementation class to play back the data from that file.
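     A minimal sketch of that layering in C++ (LabVIEW being graphical, text pseudocode will have to stand in). The class and method names below are hypothetical illustrations of the MAL/HAL split described above, not NI's reference-design API:

```cpp
// Hypothetical illustration of a measurement layer (MAL) sitting on top of a
// hardware abstraction layer (HAL). Simulation and record/playback are just
// alternative HAL implementation classes.
#include <cstdio>
#include <memory>
#include <vector>

// ---- HAL: speaks in instrument terms -------------------------------------
class Digitizer {
public:
    virtual ~Digitizer() = default;
    virtual std::vector<double> Acquire(double sampleRate, std::size_t samples) = 0;
};

class SimulatedDigitizer : public Digitizer {      // stub/mock at the HAL level
public:
    std::vector<double> Acquire(double, std::size_t samples) override {
        return std::vector<double>(samples, 0.0);  // canned "quiet" data
    }
};

class PlaybackDigitizer : public Digitizer {       // replays recorded data
public:
    explicit PlaybackDigitizer(std::vector<double> recording)
        : recording_(std::move(recording)) {}
    std::vector<double> Acquire(double, std::size_t samples) override {
        recording_.resize(samples);                // imagine this came from a file
        return recording_;
    }
private:
    std::vector<double> recording_;
};

// ---- MAL: speaks in measurement terms ------------------------------------
class StimulusResponseTest {
public:
    explicit StimulusResponseTest(std::shared_ptr<Digitizer> dig)
        : dig_(std::move(dig)) {}
    double Run() {
        // The test strategy (sweep vs. wideband pulse, etc.) lives here; the
        // HAL hides which instrument actually produced the samples.
        std::vector<double> data = dig_->Acquire(1e6, 1000);
        double sum = 0.0;
        for (double s : data) sum += s;
        return sum / static_cast<double>(data.size());  // trivial "measurement"
    }
private:
    std::shared_ptr<Digitizer> dig_;
};

int main() {
    // Swap in a real-hardware Digitizer, a SimulatedDigitizer, or a
    // PlaybackDigitizer without touching the test code above.
    StimulusResponseTest test(std::make_shared<SimulatedDigitizer>());
    std::printf("mean response: %g\n", test.Run());
    return 0;
}
```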
  10. That reminds me of a Monty Python scene... http://www.imdb.com/title/tt0071853/quotes?qt=qt0470590 Thanks for posting the announcement. We've got some great stuff lined up. Brian
  11. Outstanding. Scholz' is an Austin original. Unfortunately, I'll be showing up late, or at the after-BBQ party. @crelfpro, I'm pretty sure I prepaid last year. Brian
  12. IMG_4623.jpg

     What an awesome photo of Preston!
  13. IMG_4728.jpg

     My presentation of the prototype LabVIEW 2009 t-shirt. We chose a different design for NIWeek attendees. This one has the keyboard shortcuts for LabVIEW, printed upside down so it's easy for the person wearing it to read them. Special thanks to Shelley Gretlein, who was instrumental in providing this shirt for LAVA.
  14. IMG_4582.jpg

     What a great picture. Clearly, this was closer to the beginning of NIWeek than the end. I don't look exhausted.
  15. QUOTE (TG @ Nov 23 2008, 02:57 PM) To my knowledge, yes. It was in a section of code that only serves what we call "measure data" types. These are waveforms (analog and digital), the time stamp, and the Express Dynamic Data Type (which I call the "DDT" for more than one reason). The time stamp has nothing to leak--it's just a 64.64 fixed-point value with no strings or arrays to lose track of. All waveforms and DDTs should exhibit the problem, though. Brian
  16. Fixed in a future version of LabVIEW whose existence I can neither confirm nor deny. The problem was the "reset to default value" caused by the case structure's output being unwired in one case. The reset to default forgot to deallocate whatever was in the waveform before (the array and variant). Wiring in an empty array-of-waveforms constant doesn't relieve the problem; it still looks like a reset to default. However, if you allocate the array to contain one waveform (even if that waveform is empty), we stop leaking memory. Brian
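     As a rough illustration of the bug class he describes (hypothetical code, not LabVIEW's actual internals): a "reset to default" that overwrites a value without first releasing the buffers it owned leaks those buffers.

```cpp
// Hypothetical sketch of a "reset to default forgot to deallocate" leak.
// The structure is invented for illustration; LabVIEW's real waveform
// representation is not shown here.
#include <string>
#include <vector>

struct Waveform {
    double t0 = 0.0;                      // scalar timestamp: nothing to leak
    double dt = 1.0;
    std::vector<double>* Y = nullptr;     // heap-allocated sample array
    std::string* attributes = nullptr;    // stand-in for the attribute variant
};

// Buggy reset: stomps the fields without freeing what they pointed to.
void ResetToDefaultLeaky(Waveform& wf) {
    wf = Waveform{};                      // old Y and attributes are orphaned
}

// Fixed reset: release the owned allocations, then restore defaults.
void ResetToDefaultFixed(Waveform& wf) {
    delete wf.Y;
    delete wf.attributes;
    wf = Waveform{};
}

int main() {
    Waveform wf{0.0, 1e-3, new std::vector<double>(1000), new std::string("unit=V")};
    ResetToDefaultFixed(wf);   // swap in ResetToDefaultLeaky to reproduce the leak
    return 0;
}
```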
  17. QUOTE (shoneill @ Sep 17 2008, 10:23 AM) What I meant--and I now realize is incorrect--is PowerPC, SPARC, and PA-RISC. All of those are RISC processors, and don't have sine instructions built into them. So at the processor level, I guess we'd have to go back to the MC680x0/6888x processors, which I believe handled this situation better. On the RISC processors, we depended on math libraries to implement the transcendentals. Sun's, based on BSD, was particularly good. HP's was particularly bad, so we used a free math library instead. I don't recall how good Apple's were; I think it depended on the compiler we were using at the time. Brian [Edit: I'll add that we don't use Microsoft's libraries for this level of floating point work, because they don't support the IEEE-754 Extended Precision encoding.]
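     A quick way to see the extended-precision point in that edit (a generic C++ probe, nothing LabVIEW-specific; which format "long double" maps to depends on the compiler and platform):

```cpp
// Probe whether "long double" gives the x87 80-bit double-extended format.
// MSVC maps long double to the same 64-bit format as double (53-bit mantissa);
// GCC/Clang targeting x86 typically use the 80-bit format (64-bit mantissa).
#include <cstdio>
#include <limits>

int main() {
    std::printf("double:      %d mantissa bits, %zu bytes\n",
                std::numeric_limits<double>::digits, sizeof(double));
    std::printf("long double: %d mantissa bits, %zu bytes\n",
                std::numeric_limits<long double>::digits, sizeof(long double));
    return 0;
}
```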
  18. QUOTE (shoneill @ Sep 17 2008, 03:04 AM) I tried 64-bit LabVIEW on Windows Vista, and it yields the same result as 32-bit LabVIEW. I have not tried an AMD processor, but I predict it will match Intel. Brian
  19. We basically pass the number directly to the "fsin" x87 instruction and depend on it to produce a reasonable result. The Intel processor does not, when the input is large. For example, if I have "1e19" at the top of the floating point stack and execute the "fsin" instruction, the top of the floating point stack is supposed to change to the result of sin(1e19). Instead, it just leaves the top of the stack alone. LabVIEW reflects the results of the instruction, even though the instruction doesn't do the right thing. Note that Intel documents that the domain of fsin is +/- 2^63. We've known about this for some time, but were unsure whether the performance tradeoff--checking the domain or range and falling back to a different algorithm--was worth it. We'll look at it again. Note that non-Intel processors generally behave better in this area. Brian
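     A sketch of the kind of domain check being weighed (hypothetical code using GCC-style inline assembly on x86, not LabVIEW's actual implementation): for |x| at or beyond 2^63, skip fsin and let a library routine do the full argument reduction.

```cpp
// Hypothetical sketch: guard the x87 fsin instruction with a domain check.
#include <cmath>
#include <cstdio>

#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
static double fsin_raw(double x) {
    double result;
    __asm__("fsin" : "=t"(result) : "0"(x));  // operate on the x87 stack top
    return result;  // out-of-range input comes back unchanged (C2 flag set)
}
#else
static double fsin_raw(double x) { return std::sin(x); }  // no x87 available
#endif

static double safe_sin(double x) {
    // Intel documents fsin's domain as |x| < 2^63; beyond that, the
    // instruction leaves st(0) alone instead of computing a sine.
    const double kFsinLimit = 9223372036854775808.0;  // 2^63
    if (!(std::fabs(x) < kFsinLimit))
        return std::sin(x);  // library path does its own argument reduction
    return fsin_raw(x);
}

int main() {
    std::printf("safe_sin(1e19) = %.17g\n", safe_sin(1e19));
    std::printf("fsin_raw(1e19) = %.17g (raw instruction result)\n", fsin_raw(1e19));
    return 0;
}
```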
  20. For a limited time, I've posted photos from Tuesday and the Thursday keynote. Brian
  21. I have a project that I keep deferring that says, "remove all the device manager code from LabVIEW". The device manager is the piece that is used by (and only used by) the old LabVIEW 6-era "serpdrv" serial VIs.

     Val Brown said that you could copy these pieces to LV 7 and they will work, and if you copy them to LV 8, they won't work. I was a little skeptical of this, because I hadn't done anything to actively disable the device manager (yet). But on the other hand, we don't test serpdrv any more, so it's certainly possible for something to go wrong. As we sometimes say, "code rots".

     Anyway, I decided to copy serpdrv, _sersup.llb and serial.llb from LabVIEW 6.1 and install them in my LabVIEW 8.2 directory. From what I can tell, it is working. I sent "*IDN?\n" to an Agilent 33120A, and got a response back. "Bytes at Serial Port" also seemed to work.

     I don't condone doing this, but I do think the escape hatch is still in place. I was actually a little disappointed that it worked, because I'd just as soon go ahead and rip out the device manager code if it isn't working. Brian
  22. I was actually working on a blog posting about "serpdrv" when I discovered this thread. It wasn't clear from the original post if the discussion was about the old "serpdrv" VIs (used from LV 2.5 through 6.x) or about the "serial compatibility" VIs (used in LV 7.x and later). They have the same API, but the latter are implemented on top of VISA.

     To my knowledge, we didn't do anything in 8.x to prevent the "serpdrv" VIs from working, but we haven't tested that scenario in a few years. As I'll mention in my blog, it's inevitable that in the future, we will actively do something to keep "serpdrv" from working. (Basically by ripping out some code from LabVIEW.)

     I'm not aware of problems with the "Serial Compatibility" VIs in 8.x. I do know that we've had issues in the past with various USB-Serial devices. If you're using a computer with a built-in serial port (read "PC"), you might try that to see if there's a difference in behavior. If you're using a Mac, I'm interested in what you learn, because I'm thinking of connecting my Mac Mini to my stereo receiver and TV through serial cables.

     I also want to comment on Rolf's message about CPU usage. There's supposed to be code in the VISA driver to prevent it from using so much CPU when it's waiting for the asynchronous I/O. If you're seeing that in the latest driver, I'd be interested in knowing that it's still a problem. Send me the details.

     Brian Powell
     LabVIEW R&D