Hey everyone! It's Alejandro from the keynote demo. Wanted to shed some light on Nigel and gather feedback from this important forum (my first post!).
A little about myself: I joined NI in 2007 and had the honor of working on the LabVIEW language design and compiler teams alongside all the incredible folks you already know. In fact, Aristos Queue was once my mentor! Deeply fond of LabVIEW and all its users.
Back to AI --
What types of day-to-day tasks would you want Nigel to help you with?
And yes, I intend for Nigel to help you with your hardware and codebases even when they aren't the latest & greatest.
It types slowly because we are using GPT-4 under the hood, and the demo was live on stage. GPT-4 is about that fast, and its speed varies throughout the day; at night, it seems about twice as fast.
This presents a challenge that we are working on; we have lots of ideas to make the generation speed a non-factor. Nonetheless, I'm hopeful that GPT-4 will get faster over time, as OpenAI fully transitions it out of beta and as they buy more GPUs. It is possible to retrain Nigel with GPT-3.5 (the model behind ChatGPT), but it does not perform as well.
Nigel is capable of using structures, documenting VIs, and creating subVIs, but we did not show that on stage to keep the keynote short & sweet and focused on the core message of spec-to-test (going from a spec sheet to a working hardware test in seconds or minutes). Nigel can do much more than what we showed.
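For anyone curious what "spec-to-test" looks like in the most general sense, here is a rough, hypothetical sketch (this is NOT how Nigel is built, and the spec text, prompt, and model call are all made up for illustration) of feeding a spec excerpt to GPT-4 through the public OpenAI API and asking for test steps:

```python
# Hypothetical sketch only -- not Nigel's actual pipeline. It just illustrates the
# general "spec sheet in, test steps out" idea using the public OpenAI Python API.
from openai import OpenAI  # assumes the openai package (>= 1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Made-up spec excerpt, purely for illustration.
spec_excerpt = """
DUT: 5 V DC-DC converter
- Output voltage: 5.0 V +/- 2% under 0-3 A load
- Ripple: < 50 mV peak-to-peak at full load
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You turn hardware spec excerpts into numbered functional test steps."},
        {"role": "user", "content": spec_excerpt},
    ],
)

print(response.choices[0].message.content)  # numbered test steps derived from the spec
```

Nigel, of course, works inside LabVIEW and produces actual diagrams rather than text, but the flow is the same: spec in, runnable test out.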
Would you share a device you'd like Nigel to create a driver for automatically? That would be a wonderful test for us.