
Supply Chain Risk Mitigation



As I'm sure many of you know, there's an issue sourcing any NI products with FPGAs on them. Lead times are... out there. Anyone who can't tolerate those lead times is probably thinking about a plan B. I'm wondering if anyone has gone through the process of designing a replacement for an NI product. Our application is written in LabVIEW, and one of our biggest risks is that the run-time engine isn't open source. There are so many test hours behind our LabVIEW app, but if we run into a "bug" and NI won't support it because it's on third-party hardware, we could really find ourselves in a bind. How much of the LabVIEW code did you reuse, or did you just start from scratch? What's the process like? Expensive? Buggy?


I can't help you with this. We have created many cRIO and some sbRIO systems in LabVIEW, and while we see the supply chain disruption too, which makes getting the right systems shipped on time a real problem, we have not yet considered redesigning any of them without LabVIEW. If we went that route, I would not expect to reuse much of the existing LabVIEW code. The design documents are likely the only thing that will really help, which is one reason to actually write them and not just trust that "LabVIEW code is self-documenting". It seldom is when you look at it a year or more later, unless it is very trivial code, and FPGA code is never trivial; it typically contains many involved code segments.

Even the real-time part would need to be rebuilt with something else, as interfacing LabVIEW to third-party FPGA designs is not easy. You would at least need to replace the entire cRIO shared library with something of your own that interfaces to whatever FPGA architecture you are using.
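
To give a rough idea of the scale of that: below is a minimal C sketch of the kind of interface such a replacement library would have to export so the LabVIEW side can keep using Call Library Function Nodes. All names and the register-window scheme are assumptions for illustration only, not NI's actual API.

```c
/* A minimal sketch, assuming a memory-mapped register window: the sort of
 * shared-library ABI you would have to export yourself so LabVIEW can keep
 * calling the FPGA through Call Library Function Nodes. Every name here is
 * hypothetical; this is not NI's API or any vendor's.                    */
#include <stdint.h>
#include <stddef.h>

/* Assume something at startup (an mmap of the FPGA register window, a
 * vendor driver, ...) hands us the mapped base address.                 */
static volatile uint32_t *fpga_regs = NULL;

int32_t fpga_attach(volatile uint32_t *mapped_base)
{
    fpga_regs = mapped_base;
    return fpga_regs != NULL ? 0 : -1;
}

/* Flat scalar parameters map cleanly onto a Call Library Function Node. */
int32_t fpga_write_reg(uint32_t byte_offset, uint32_t value)
{
    if (fpga_regs == NULL) return -1;
    fpga_regs[byte_offset / sizeof(uint32_t)] = value;
    return 0;
}

int32_t fpga_read_reg(uint32_t byte_offset, uint32_t *value)
{
    if (fpga_regs == NULL || value == NULL) return -1;
    *value = fpga_regs[byte_offset / sizeof(uint32_t)];
    return 0;
}
```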

Edited by Rolf Kalbermatter

Many moons ago we were using the NI 32-channel digital input and output cards (now no longer available). These boards could sink a couple of amps per channel, and we needed at least 1.5 amps per channel. There were no other cards on the market that could sink amps (most managed only a few hundred milliamps), and the lead time became 4 months. We needed 128 digital outputs and 64 digital inputs (6 cards per machine).

Talking with the electrical guy on the project, he said we needed the amps so that we could activate 24 V control signals directly without buying relays to intermediate (we would have needed 128 relays at about $8 a pop, adding ~$1000 unbudgeted per machine, as well as the headaches of trying to mount them), and the cards were only about $300. He said he could do better in 3 weeks :D

So he designed a 64-channel card with 32 digital inputs and 32 digital outputs that could sink 3 amps per channel. He designed each with a serial port (the NI originals were PCI cards), but I wasn't happy with that and made him put in RS485 (today I would have gone with Ethernet). So we ended up with 32 input channels and 32 output channels with 1 Mbit RS485 comms, able to sink 3 amps per channel. The BOM for the cards cost $30 and we got them built for $100 as prototypes; no doubt that cost would have come down if we had gone into production (the electrical guy said the mass-production cost would have been about $90 per card, all in). So now we only needed 4 cards and had oodles of digital inputs spare at 60% of the cost of the NI cards, although we did need an NI RS485 card, so it broke about even. Need more IO? Just stick another card on the bus :D

I wrote the firmware for the cards and a LabVIEW driver because, of course, the NI drivers were for PCI, not RS485. There were drop-in replacement VIs plus a couple of VIs with features I'd always wished the NI drivers had had.
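
For a flavour of what the firmware side involves: the actual frame layout isn't something I've described here, so the C sketch below is an assumption-laden illustration of a command frame on a multi-drop bus, not the protocol we shipped.

```c
/* Pure guesswork at a frame for a multi-drop RS485 bus like this; every
 * field is an assumption for illustration.
 * Layout: [addr][cmd][payload: 4 bytes LE][crc: 2 bytes LE]            */
#include <stdint.h>
#include <stddef.h>

enum { CMD_SET_OUTPUTS = 0x01, CMD_READ_INPUTS = 0x02 };

/* CRC-16 (poly 0x1021, init 0xFFFF), a common pick for small serial links. */
static uint16_t crc16(const uint8_t *p, size_t n)
{
    uint16_t crc = 0xFFFF;
    while (n--) {
        crc ^= (uint16_t)(*p++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Build a "set outputs" frame for card `addr`; one bit per channel.
 * `buf` must hold at least 8 bytes; returns the frame length.          */
size_t build_set_outputs(uint8_t buf[8], uint8_t addr, uint32_t channels)
{
    buf[0] = addr;
    buf[1] = CMD_SET_OUTPUTS;
    for (int i = 0; i < 4; i++)
        buf[2 + i] = (uint8_t)(channels >> (8 * i));  /* little-endian */
    uint16_t crc = crc16(buf, 6);
    buf[6] = (uint8_t)(crc & 0xFF);
    buf[7] = (uint8_t)(crc >> 8);
    return 8;
}
```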

FPGAs are probably a different kettle of fish, however, due to throughput. But there are advantages to doing your own thing. The one thing I did learn from the exercise was: if possible, don't put cards in the computer. From then on I always looked for Ethernet, RS485, or Profibus IO (preferably SCPI instruments where applicable) so that different manufacturers' hardware could be easily swapped out; software changes because of hardware changes don't scare me. It also means you don't need the headaches that come with PCI expanders and large industrial computers, and you get pretty much care-free mounting and wire routing without all the associated connector stress problems.
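
Part of why the swap-out is painless with SCPI over Ethernet is that the whole "driver" is just text on a socket. A minimal C illustration (the IP address is a placeholder; 5025 is the conventional raw SCPI port):

```c
/* Minimal illustration of why Ethernet/SCPI instruments are easy to swap:
 * the whole "driver" is text over a TCP socket.                         */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(5025) };
    inet_pton(AF_INET, "192.168.1.50", &addr.sin_addr);  /* placeholder */

    if (connect(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }

    /* Any SCPI instrument answers this identification query. */
    const char *q = "*IDN?\n";
    write(s, q, strlen(q));

    char reply[256] = {0};
    read(s, reply, sizeof reply - 1);
    printf("%s", reply);  /* e.g. vendor,model,serial,firmware */

    close(s);
    return 0;
}
```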

Edited by ShaunR

Yeah, we ordered a PCIe FPGA card last August. We just got it last week. Our purchasing team kept emailing every month asking internally if they could close the account. We'd say no, then tell them to ask again next month. Glad it arrived, and glad it wasn't for anything system critical. This is mostly a pet project, and could probably have been done with an embedded microcontroller in C. But we already had the code in LabVIEW, and it would maybe have taken a few weeks to rewrite and test it in another language. The management involved is very aware of the supply issues and didn't push much. I did reach out to NI 4 times asking for a status update, and never once heard back.

On 4/9/2022 at 2:56 AM, Rolf Kalbermatter said:

[...] If we went that route, I would not expect to reuse much of the existing LabVIEW code. The design documents are likely the only thing that will really help. [...]

Regarding the supply chain, we found and ordered ~50 boards that are very similar to the sbRIO-9651: https://krtkl.com/snickerdoodle/

The FPGA interface doesn't look overly complicated. We can get most of that from Xilinx/Vivado since both the ARM and the FPGA are on the Zynq. Maybe I'll start a new thread to see if anyone has more experience in Vivado. They were a little spammy, but maybe the MangoTree folks could point me in the right direction. It seems like a bit more of a legitimate path since NI has the VHDL export tool. @CJC IN six person @MT_Andy
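
From what I can tell so far, the ARM side can poke Vivado-generated AXI registers from Linux with nothing more exotic than /dev/mem. A sketch of the idea, with a made-up base address (the real one comes from the Vivado address editor):

```c
/* Sketch of the register access Vivado-generated AXI blocks give you on a
 * Zynq from the ARM side, via /dev/mem. The base address is a made-up
 * example; check your own Vivado address map.                           */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define AXI_GPIO_BASE 0x41200000u   /* assumption: not from any real design */
#define MAP_SIZE      0x1000u

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, AXI_GPIO_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    regs[0] = 0xDEADBEEF;                            /* write register 0x00 */
    printf("reg0 = 0x%08x\n", (unsigned)regs[0]);    /* read it back        */

    munmap((void *)regs, MAP_SIZE);
    close(fd);
    return 0;
}
```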

Regarding, "If we would go that route I do not expect to reuse much of the existing LabVIEW code in any way" that's rough to hear. If we could keep the LabVIEW code that would

  • Leave a chance of moving back to the sbRIO after the supply chain straightens out
  • Help sell NI as a good platform for R&D and prototyping, since the code can move more easily to the "final" product
5 hours ago, hooovahh said:

Yeah, we ordered a PCIe FPGA card last August. We just got it last week. [...] Glad it arrived, and glad it wasn't for anything system critical. [...]

Unfortunately, we hitched our wagon to the sbRIO. It's the controller for our flagship product, and there would be a serious hit to revenue without it.

2 weeks later...
On 4/9/2022 at 7:24 AM, ShaunR said:

[...]

So he designed a 64 channel card with 32 digital in and 32 digital out that could sink 3 amps per channel. [...]

I worked with an engineer named "Glen". Glen was the inventor of GXI, or "Glen's Expense-able Interface", with a small selection of I/O devices to choose from. It sounds similar to what your guy did, but for a different reason: none of his projects came with a budget for test equipment!

