Posts posted by brian

  1. I was looking at https://www.ni.com/en-us/events/niconnect.html

    where it says:

    Will there be speaking opportunities at NI Connect?

    There will not be an external “Call for Presentations” for NI Connect Austin.

    Elsewhere, it hints that there could be external presentations, but my guess is they'll be industry-focused (and apparently with invited speakers only).

    Anybody heard more about it?  Any thoughts on this?

  2. 2 minutes ago, X___ said:

    XControls are not standard LV objects. Can't put them in array, they have all kinds of limitations, and while I understand that they were created at great cost and with the best intentions

    "at great cost"?  I don't think so.  They are pretty hackish.

    Would you put Channel Wires in the same boat?  They are not intrinsic, built-in objects, either.  Malleable VIs, Classes, Interfaces? Most of the dialogs?  Also built in LabVIEW.

  3. 1 hour ago, X___ said:

    They could spin-off improvements to control/indicator modifications to the community and focus on the things that matter most to their core business.

    XControls, anyone?

    More seriously, Jeff's vision is for more of LabVIEW to be written in LabVIEW, with the intent that it empowers users like us to extend it.

    LabVIEW doesn't have to be open source to do that, and my optimism comes from the possibility that the R&D team is going to have more resources to increase extensibility.

  4. 50 minutes ago, X___ said:

    Of course there would be a cost to open sourcing the code: cleaning it up and documenting it as the team fixes things up in the upcoming releases is something that would be expected from a pro team anyway.

    Which might be the main impediment to open-sourcing it.  The developers working on both products are proud of their work and would be reluctant to release it "as-is".  They'd want to spend the time to make it presentable to people without flashlights.  And NI wouldn't want to spend the money to do it.  For NI to choose the open-source route, I think they'd have to consider it the easy (read "cheap") way out, and this isn't it.

    50 minutes ago, X___ said:

    The real obstacle seems to be the "NI culture". I have no idea what it is, but the recent past has painted it has rather quitoxic if not readily toxic. 

    Did you mean quixotic?  I'm not familiar with quitoxic, but it might be a jargon word.

    Regardless, I am familiar with the NI culture and also with toxic cultures, and NI doesn't have a toxic culture.  Believe me on this.  It has a good--if not great--culture, with a few aberrations here and there.  Many of those aberrations have been sacked. ;)  I wouldn't describe it as quixotic, either--I think the NXG decision was reluctantly chosen after years of angst.

    If you would like to hear my stories about a toxic workplace culture, invite me back for a second night at the bar and I'll tell you all about it. 😮 

  5. On 12/8/2020 at 11:22 PM, Neil Pate said:

    Conversely, it feels like NXG was built by devs not actually intimately familiar with LabVIEW.

    There are some (including me) who believe that more people in LabVIEW R&D (both CurrentGen and NXG) should better understand how LabVIEW is used in the real world.

    Regardless, there is a lot of truthiness to your comment, and the reality is way too complicated to explain until I've had at least a couple of beers in me.

    19 hours ago, X___ said:

    It seems that one way to give a future (or definitely burry any hopes for LabVIEW at the sight of the quagmire) would be to open source it.

    I've thought a lot about this, but I just don't think it would be a successful open source project.  It's not like it's a small library that's easy for someone to understand, much less modify.  You need guides with really strong flashlights to show you the way.  I speak from experience.  When I led the team that created 64-bit LabVIEW 10+ years ago, I had to visit every nook and cranny in the source code, and find the right people with the right flashlights.

    I think a more viable alternative (if NI didn't want to own LabVIEW any more) would be to spin it out as a subsidiary (or maybe non-profit?) along with the people who know the source code.  It wouldn't be super profitable, but might be strong enough to independently support its development.  The main impediment to this happening is that LabVIEW FPGA (and to a lesser extent, LabVIEW Real-Time) are really valuable to NI and probably too intertwined with the rest of LabVIEW for NI to keep FPGA and spin out the rest.

  6. The other day, I wrote up a lengthy response to a thread about NXG on the LabVIEW Champions Forum.  Fortunately, my computer blue-screened before I could post it--I kind of had the feeling that I was saying too much and it was turning into a "drunk history of NXG" story.  Buy me some drinks at the next in-person NIWeek/CLA/GLA Summit/GDevCon, and I'll tell you what I really think!

    First, I'll say that abandoning NXG was a brave move and I laud NI for making it.  I'm biased; I've been advocating this position for many years, well before I left NI in 2014.  I called it "The Brian Powell Plan".  :) 

    I'm hopeful, but realistic, about LabVIEW's future.  I think it will take a year or more for NI to figure out how to unify the R&D worlds of CurrentGen and NXG--how to modify teams, product planning, code fragments, and everything else.  I believe the CurrentGen team was successful because it was small and people left them alone (for the most part).  Will the "new world without NXG" return us to the days of a giant software team where everyone inside NI has an opinion about every feature and the team is on the hook for creating 20-40% revenue growth?  I sure hope not.  That's what I think will take some time for NI to figure out.

    Will the best of NXG show up easily in CurrentGen?  No!  But I think the CurrentGen LabVIEW R&D team might finally get the resources to improve some of the big architectural challenges inside the codebase.  Also, NI has enormously better product planning capability than they did when I left.

    I am optimistic about LabVIEW's future.

  7. aHA! I knew you'd try to pull this one! Read THIS! BAM!

    But Relf, what about all those years you double-charged me for the BBQ?

    And don't you owe Justin and me a round from the New Orleans "planning" meeting we had earlier this year at that bar in the French Quarter?

    ;)

  8. Thanks for the feedback. There are some things I'd like to see improved in the HAL decomposition we've put forth (http://zone.ni.com/devzone/cda/epd/p/id/6307), but some of your feedback is new.

    At the risk of introducing new terminology, I prefer to talk about a "measurement abstraction layer" (MAL) and a "hardware abstraction layer" (HAL). The ASL is closer to the MAL, and the DSSP is closer to the HAL. The MAL can present a high-level measurement--e.g., a "stimulus/response test" or a "filter characterization test"--and implement the test strategy--e.g., an RFSG/RFSA frequency sweep or a wideband pulse measurement. I would expect both the MAL and HAL layers to be OO.

    One of the cool things about our approach is that you can choose to simulate (stub/mock) at either level of abstraction, and it's just a matter of writing another implementation class. If you want to use simulation at the HAL layer, then we already have simulation built into our IVI and modular instrument drivers.

    It sounds like you want to record real-world measurement data and then play it back. Having a HAL in place will facilitate the recording of the data, since you can connect to your actual test bench and capture the data to a file without touching the rest of your application. Then you can use a different HAL implementation class to play back the data from that file.
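
    Roughly, the split looks like this--a minimal C++ sketch of the idea, not the reference design itself (class and method names here are made up):

    ```cpp
    #include <memory>
    #include <vector>

    // Hardware abstraction layer (HAL): one interface per instrument role.
    class ISignalSource {
    public:
        virtual ~ISignalSource() = default;
        virtual void GenerateTone(double frequencyHz, double amplitudeV) = 0;
    };

    class ISignalAnalyzer {
    public:
        virtual ~ISignalAnalyzer() = default;
        virtual double MeasurePowerDbm(double frequencyHz) = 0;
    };

    // Measurement abstraction layer (MAL): expresses the test and its
    // strategy (here, a frequency sweep) purely in terms of the HAL.
    class FilterCharacterizationTest {
    public:
        FilterCharacterizationTest(std::shared_ptr<ISignalSource> src,
                                   std::shared_ptr<ISignalAnalyzer> ana)
            : src_(std::move(src)), ana_(std::move(ana)) {}

        std::vector<double> RunSweep(double startHz, double stopHz, int points) {
            std::vector<double> response;
            for (int i = 0; i < points; ++i) {
                double f = (points > 1)
                               ? startHz + (stopHz - startHz) * i / (points - 1)
                               : startHz;
                src_->GenerateTone(f, 1.0);
                response.push_back(ana_->MeasurePowerDbm(f));
            }
            return response;
        }

    private:
        std::shared_ptr<ISignalSource> src_;
        std::shared_ptr<ISignalAnalyzer> ana_;
    };

    // Simulation at the HAL level is just another pair of implementation
    // classes; the measurement layer above doesn't change at all.
    class SimulatedSource : public ISignalSource {
    public:
        void GenerateTone(double, double) override {}  // nothing to drive
    };

    class SimulatedAnalyzer : public ISignalAnalyzer {
    public:
        double MeasurePowerDbm(double frequencyHz) override {
            return -3.0 * (frequencyHz / 1.0e6);  // fake low-pass roll-off
        }
    };
    ```

    Recording and playback fit the same pattern: one HAL implementation wraps the real driver and logs to a file, and another replays that file.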

  9. For those who don't know them, the NI Field Architects are a small and elite group of LabVIEW awesomeness (some might call them oracles), with a *lot* of experience (ie: they're old :P) - check out their new blog here: http://labviewjournal.com/

    That reminds me of a Monty Python scene... http://www.imdb.com/title/tt0071853/quotes?qt=qt0470590

    Thanks for posting the announcement. We've got some great stuff lined up.

    Brian

  10. QUOTE (TG @ Nov 23 2008, 02:57 PM)

    Oh boy good to know this and thanks for reporting!. Is it limited to waveform types only? (hopefully)

    To my knowledge, yes. It was in a section of code that only serves what we call "measure data" types. These are waveforms (analog and digital), the time stamp, and the Express Dynamic Data Type (which I call the "DDT" for more than one reason :P ).

    The time stamp has nothing to leak--it's just a 64.64 fixed-point value with no strings or arrays to lose track of. All waveforms and DDT should exhibit the problem, though.
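
    As a rough picture of the difference (hypothetical C++ field names, not LabVIEW's internal layout):

    ```cpp
    #include <cstdint>
    #include <map>
    #include <string>
    #include <vector>

    // The time stamp is a flat 128-bit (64.64 fixed-point) value;
    // nothing in it is separately allocated, so there is nothing to leak.
    struct Timestamp {
        int64_t  wholeSeconds;      // seconds relative to the LabVIEW epoch
        uint64_t fractionalSecond;  // units of 2^-64 of a second
    };

    // The waveform owns dynamically allocated pieces (the Y array and the
    // attribute variant); forgetting to deallocate those during a "reset
    // to default" is exactly the kind of thing that leaks.
    struct AnalogWaveform {
        Timestamp t0;                                   // start time
        double    dt;                                   // seconds per sample
        std::vector<double> Y;                          // sample data (heap)
        std::map<std::string, std::string> attributes;  // stands in for the variant
    };
    ```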

    Brian

  11. Fixed in a future version of LabVIEW whose existence I can neither confirm nor deny.

    The problem was the "reset to default value" caused by the case structure's output being left unwired in one case. The reset to default forgot to deallocate whatever was in the waveform before (the array and the variant).

    Wiring in an empty array-of-waveforms constant doesn't relieve the problem; it still looks like a reset to default. However, if you allocate the array to contain one waveform (even if that waveform is empty), we stop leaking memory.

    Brian

  12. QUOTE (shoneill @ Sep 17 2008, 10:23 AM)

    Well then which processors were you referring to with "Note that non-Intel processors generally behave better in this area.".

    What I meant--and I now realize it was incorrect--was PowerPC, SPARC, and PA-RISC. All of those are RISC processors and don't have sine instructions built into them.

    So at the processor level, I guess we'd have to go back to the MC680x0/6888x processors, which I believe handled this situation better.

    On the RISC processors, we depended on math libraries to implement the transcendentals. Sun's, based on BSD, was particularly good. HP's was particularly bad, so we used a free math library instead. I don't recall how good Apple's were; I think it depended on the compiler we were using at the time.

    Brian

    [Edit: I'll add that we don't use Microsoft's libraries for this level of floating-point work, because they don't support the IEEE-754 Extended Precision encoding.]
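
    For the curious, you can see that difference from a compiler's point of view with something like this (just an illustration, nothing LabVIEW-specific): gcc and clang on x86 map long double to the 80-bit x87 extended format, while Microsoft's compiler maps it to the same 64-bit format as double.

    ```cpp
    #include <cfloat>
    #include <cstdio>

    int main() {
        // x86 gcc/clang: long double is the 80-bit x87 extended format,
        // so LDBL_MANT_DIG is 64. MSVC: long double is identical to the
        // 64-bit double, so LDBL_MANT_DIG is 53.
        std::printf("long double mantissa bits: %d\n", LDBL_MANT_DIG);
        std::printf("sizeof(long double): %zu bytes\n", sizeof(long double));
        return 0;
    }
    ```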

  13. We basically pass the number directly to the "fsin" x87 instruction, and depend on it to produce a reasonable result.

    The Intel processor does not produce a reasonable result when the input is large.

    For example, if I have "1e19" at the top of the floating point stack, and execute the "fsin" instruction, the top of the floating point stack is supposed to change to the result of sin(1e19). Instead, it just leaves the top of the stack alone. LabVIEW reflects the results of the instruction, even though the instruction doesn't do the right thing. Note that Intel documents that the domain of fsin is +/- 2^63.

    We've known about this for some time, and were unsure whether the performance tradeoff--checking the domain or range and falling back to a different algorithm--was worth it. We'll look at it again.
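
    In pseudo-C++ terms, the guard being weighed looks something like this (the names are mine, and the library call stands in for the fsin-based fast path, which portable code can't express directly):

    ```cpp
    #include <cmath>

    // fsin is only defined for |x| < 2^63; beyond that the x87 instruction
    // leaves its operand unchanged instead of computing sin(x), so the
    // fast path needs a domain check in front of it.
    double guarded_sin(double x) {
        const double kFsinLimit = 9223372036854775808.0;  // 2^63
        if (std::fabs(x) < kFsinLimit) {
            return std::sin(x);  // stands in for the raw fsin fast path
        }
        // Out of fsin's domain (or NaN): fall back to an algorithm that does
        // its own full-range argument reduction. Modern C libraries do this
        // inside sin(), so delegating to the library is the simplest fallback.
        return std::sin(x);
    }
    ```

    The cost in question is that the comparison (and the branch) sits on every call to the sine primitive.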

    Note that non-Intel processors generally behave better in this area.

    Brian

  14. I have a project that I keep deferring that says, "remove all the device manager code from LabVIEW". The device manager is the piece that is used by (and only used by) the old LabVIEW 6-era "serpdrv" serial VIs.

    Val Brown said that you could copy these pieces to LV 7 and they would work, but if you copied them to LV 8, they wouldn't.

    I was a little skeptical of this, because I hadn't done anything to actively disable the device manager (yet). But on the other hand, we don't test serpdrv any more, so it's certainly possible for something to go wrong. As we sometimes say, "code rots".

    Anyway, I decided to copy serpdrv, _sersup.llb and serial.llb from LabVIEW 6.1 and install them in my LabVIEW 8.2 directory. From what I can tell, it is working. I sent "*IDN?\n" to an Agilent 33120A, and got a response back. "Bytes at Serial Port" also seemed to work.

    I don't condone doing this, but I do think the escape hatch is still in place. I was actually a little disappointed that it worked, because I'd just as soon go ahead and rip out the device manager code if it isn't working.

    Brian

  15. I was actually working on a blog posting about "serpdrv" when I discovered this thread.

    It wasn't clear from the original post if the discussion was about the old "serpdrv" VIs (used from LV 2.5 through 6.x) or about the "serial compatibility" VIs (used in LV 7.x and later). They have the same API, but the latter are implemented on top of VISA.

    To my knowledge, we didn't do anything in 8.x to prevent the "serpdrv" VIs from working, but we haven't tested that scenario in a few years. As I'll mention in my blog, it's inevitable that in the future, we will actively do something to keep "serpdrv" from working. (Basically by ripping out some code from LabVIEW.)

    I'm not aware of problems with the "Serial Compatibility" VIs in 8.x. I do know that we've had issues in the past with various USB-Serial devices. If you're using a computer with a built-in serial port (read "PC"), you might try that to see if there's a difference in behavior. If you're using a Mac, I'm interested in what you learn, because I'm thinking of connecting my Mac Mini to my stereo receiver and TV through serial cables.

    I also want to comment on Rolf's message about CPU usage. There's supposed to be code in the VISA driver to prevent it from using so much CPU when it's waiting for the asynchronous I/O. If you're seeing that in the latest driver, I'd be interested in knowing that it's still a problem. Send me the details.

    Brian Powell

    LabVIEW R&D
