
Posts posted by infinitenothing

  1. I've never used the OpenG ZIP library, but most of the packages I've used are statically linked and just get uploaded/compiled as needed.

    FYI, the 9627 doesn't have enough disk space, with the system image on it, to use the RAD/system-imaging tools, so you won't be able to replicate it that way. I'm stuck with LabVIEW 2019 until all our 9627 targets are out of support.

  2. On 11/10/2022 at 11:26 AM, Lipko said:

    Yup, I mentioned the reference thing too. With a reference array you of course have to do explicit array indexing and wire the reference all around; I don't see much improvement. It's just personal preference.

    The advantage is that indexing an array of refnums does not change the internal semi-hidden active plot state of the control.

  3. 2 hours ago, Lipko said:

    I don't see how LabVIEW should know which plot you intend to work on, or what different mechanism there could be. Sure, the above example should be easy to figure out, but it's easy to come up with funkier situations. Maybe each plot should have its own reference, but I don't see that as a superior solution. Or maybe choosing the active plot should be forced (an invoke node instead of a property node)? That's not very appealing either.

    This pattern is far better than having an array of sub-property clusters and manipulating those arrays. Though I agree that all these types of properties should have an array version too, like graph cursors and annotations, for example.

    I often do ugly hacks with graphs and this problem has never really bitten me. The code in the original post (which I don't fully understand, as it has some blocks I don't know) looks like the model, view, and controller are not separated enough.

    I think either of those ideas would be superior. The graph should have a plots property that returns an array of plot references. We see this architecture with things like tabs having an array of page references. An invoke node that didn't force you to do a write when you only want to do a read would also avoid this problem.

     

    38 minutes ago, drjdpowell said:

    Does a Property Node, with multiple Properties set, execute as a single action, without a parallel Property Node executing in the middle?   If so, then resetting the Active Plot in the second Property Node in the bottom loop would prevent any race condition.

    Even if it were a single action, is there a promise to maintain that in future versions? I solved this issue by wrapping the graph reference in a DVR.
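    For anyone who wants to see the failure mode outside of LabVIEW, here is a rough text-language analogy (Python, with made-up names): two parallel writers share one hidden "active plot" index, much like the graph's Active Plot property, and serializing the select-then-write behind a lock plays the role the DVR plays in my fix.

    import threading

    class Graph:
        """Toy stand-in for a graph control with a hidden Active Plot state."""
        def __init__(self, n_plots):
            self.active_plot = 0                 # semi-hidden shared state
            self.colors = [None] * n_plots

        def set_color(self, color):
            # The write lands on whichever plot is currently "active".
            self.colors[self.active_plot] = color

    graph = Graph(2)
    lock = threading.Lock()                      # plays the role of the DVR
    misses = [0, 0]

    def update_plot(plot, color, use_lock):
        for _ in range(50000):
            if use_lock:
                with lock:                       # select + write become one atomic action
                    graph.active_plot = plot
                    graph.set_color(color)
            else:
                graph.active_plot = plot         # another writer can change this...
                graph.set_color(color)           # ...before this write lands
            if graph.colors[plot] != color:      # any mismatch is a symptom of the race
                misses[plot] += 1

    for use_lock in (False, True):
        misses[:] = [0, 0]
        threads = [threading.Thread(target=update_plot, args=(i, c, use_lock))
                   for i, c in enumerate(("red", "blue"))]
        for t in threads: t.start()
        for t in threads: t.join()
        print("locked" if use_lock else "unlocked", "misses:", misses)

    The unlocked pass can rack up misses; the locked pass cannot, for the same reason the DVR-wrapped property access can't be interleaved.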

  4. 10 hours ago, ShaunR said:

    That is a typical symptom of a race condition.

    I would suggest taking a look at "LabVIEW <version>\examples\lvoop\SingletonPattern". It is much simpler and easier to understand than GOOP examples which have a lot of infrastructure included.

    I couldn't actually find that example there. Is there a package I need? Or maybe it was moved or renamed in a different version of LabVIEW?
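    For anyone who lands here from a search, the idea behind that example is just a class that hands every caller the same instance. A minimal text-language sketch of the pattern (Python, with made-up names; this is not the NI shipping example):

    import threading

    class Config:
        """Minimal singleton: every call to Config.instance() returns the same object."""
        _instance = None
        _lock = threading.Lock()                 # guards first-time creation

        @classmethod
        def instance(cls):
            with cls._lock:
                if cls._instance is None:
                    cls._instance = cls()
                return cls._instance

    a = Config.instance()
    b = Config.instance()
    print(a is b)                                # True: both names refer to one object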

  5. 10 hours ago, ShaunR said:

    That's not good at all. With some fiddling you might get 4-5Gb/s but that's what you expect from a low-end laptop. Are you sure it's Gb/s and not GB/s? You are hoping for at least 20Gb/s+.

    I tested two other computers. Interestingly, I found that on those computers the consumer looking for the end condition couldn't keep up. I would have thought a U8 comparison would be reasonably speedy, but once I stopped checking the whole array I could get 11 Gbps (see the sketch at the end of this post). The video was pretty useless, as the manufacturer doesn't publish recommended settings as far as I know. I don't know if I have the patience to fine-tune it on my own.

     

    6 hours ago, Phillip Brooks said:

    Parallel helps. I don't understand why, but the improvement is a few times more than you'd expect from multiplying the single-worker rate by the number of workers. I'm more interested in single connections at this time, though.
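    To illustrate what "stopped checking the whole array" means above, here's a rough sketch (Python; the terminator byte value is made up). Rescanning everything received so far for the end marker on every read makes the consumer's work grow quadratically over a transfer; scanning only the bytes that just arrived keeps it linear:

    # Slow: rescan the entire accumulated buffer on every read.
    def find_end_rescan(chunks, end=0xFF):
        buffer = bytearray()
        for chunk in chunks:
            buffer.extend(chunk)
            if end in buffer:            # O(total bytes) every iteration
                return True
        return False

    # Fast: only scan the newly received chunk for the end condition.
    def find_end_incremental(chunks, end=0xFF):
        for chunk in chunks:
            if end in chunk:             # O(chunk) per iteration
                return True
        return False

    chunks = [bytes(65536)] * 1000 + [b"\x01\x02\xff"]   # ~64 MB, marker in the last chunk
    print(find_end_incremental(chunks))                   # True, with far fewer comparisons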

  6. I got similar performance from iperf. My question then is what knobs do I have to tweak to get closer to 10Gbps?

    I attached my benchmark code in case anyone is curious about the 100% CPU. I see that on the server, which is receiving the data, not on the client, which is sending. The client's busiest logical processor is at 35% CPU. The server still has one logical processor at 100% use, and as far as I can tell it's all spent in the TCP Read primitive.

    tcp bandwidth test Folder.zip
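    For anyone who doesn't want to open the attachment, the benchmark boils down to something like this (a Python sketch of the same idea, not the attached VIs; the port number, chunk size, and total are arbitrary):

    import socket
    import threading
    import time

    PORT, CHUNK, TOTAL = 61557, 1 << 16, 1 << 30       # 64 KiB sends, 1 GiB total

    def server():
        # Receiver: count the bytes and time them. This is the side that pegs a core.
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            received, start = 0, time.perf_counter()
            while received < TOTAL:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.perf_counter() - start
            print(f"{received * 8 / elapsed / 1e9:.2f} Gbps")

    def client(host="127.0.0.1"):
        # Sender: push fixed-size chunks as fast as the stack will accept them.
        payload = bytes(CHUNK)
        with socket.create_connection((host, PORT)) as conn:
            for _ in range(TOTAL // CHUNK):
                conn.sendall(payload)

    t = threading.Thread(target=server)
    t.start()
    time.sleep(0.5)                                     # give the listener a moment to come up
    client()
    t.join()

    Over loopback this mostly exercises the stack; pointing client() at the other machine is where the 10GbE numbers come from.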

  7. Has anyone done a bandwidth test to see how much data they can push through a 10GbE connection? I'm currently seeing ~2 Gbps, with one logical processor at 100%. I could try to push harder, but I'm wondering what other people have seen out there. I'm using a packet structure similar to STM (sketched roughly below). I bet jumbo frames would help.

    Processor on the PC that transmits the data: Intel(R) Xeon(R) CPU E3-1515M v5 @ 2.80 GHz, 2808 MHz, 4 cores, 8 logical processors
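    By "similar to STM" I just mean each payload goes out behind a small header. A minimal sketch of that kind of framing (Python; the two-field header here is my simplification, not NI's exact STM layout):

    import io
    import struct

    HEADER = struct.Struct(">IH")          # payload length (u32) + channel/meta-data ID (u16)

    def pack(channel_id, payload):
        # Prepend the header so the reader knows how many bytes belong to this message.
        return HEADER.pack(len(payload), channel_id) + payload

    def unpack(read):
        # read(n) must return exactly n bytes from the stream.
        length, channel_id = HEADER.unpack(read(HEADER.size))
        return channel_id, read(length)

    buf = io.BytesIO(pack(7, b"hello"))
    print(unpack(buf.read))                # (7, b'hello')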

  8. FYI, in my particular case there's no UI. It's all headless RT code targeting a ZYNQ [sb,c]RIO. There are a few TCP loops, some logic for homing motors, some RS232 commands out, a bit of data processing/analysis, etc. I can't go out to a DLL because my main roadblock is the run-time engine. If I can get the run-time engine working, then I'll just keep everything in LabVIEW.

    Speaking of data flow, do you think some languages are better targets than others? The options I'm considering are Java and Python, mostly because I want memory management to stay out of my way as much as possible.

    Re: black pill. There are many other things I'd prefer to spend my time on.

  9. Has anyone gone through the experience of rewriting your LabVIEW code in a different programming language? I'm wondering if it was a total rewrite or if you went line by line, translating it into the new language. After the effort was over, was the end result still buggy? Did it take a while to get back to its former reliability?

    For people who haven't gone through that: what's your game plan if the time comes when you have to move your code over?

    Related thread explaining some of the context of this question:

     

  10. 5 hours ago, hooovahh said:

    Yeah we ordered a PCIe FPGA card last August.  We just got it last week.  Our purchasing team kept emailing us every month asking internally if they could close the account.  We'd say no, then tell them to ask next month. Glad it arrived, and glad it wasn't for anything system critical.  This is mostly a pet project, and could probably have been done with an embedded microcontroller in C.  But we did already have the code in LabVIEW, and maybe it would have taken a few weeks to re-write and test it in another language.  Management involved is very aware of supply issues and didn't push the issue much.  I did reach out to NI 4 times asking for a status update, and never once heard back.

    Unfortunately, we hitched our wagon to the sbRIO. It's the controller for our flagship product, and there would be a serious hit to revenue without it.

  11. On 4/9/2022 at 2:56 AM, Rolf Kalbermatter said:

    I can't help you with this. We have created many cRIO and some sbRIO systems in LabVIEW, and while we see the supply chain disruption too, which makes getting the correct systems shipped on time a real problem, we have not yet considered redesigning any of them without LabVIEW. If we were to go that route, I would not expect to reuse much of the existing LabVIEW code in any way. The design documents are likely the only thing that will be really helpful, which is one reason to actually write them and not just trust that "LabVIEW code is self-documenting". It seldom is when you look at it a year or more later, unless it is very trivial code, and FPGA code is really never trivial; there are typically many involved code segments.

    Even the real-time part would need to be rebuilt with something else, as interfacing LabVIEW to third-party FPGA designs is not easy. You would at least need to replace the entire cRIO shared library with something of your own that interfaces to whatever FPGA architecture you are using.

    Regarding the supply chain, we found and ordered ~50 boards that are very similar to the 9651: https://krtkl.com/snickerdoodle/

    The FPGA interface doesn't look overly complicated. We can get most of that from Xilinx/Vivado, since both the ARM and the FPGA are on the ZYNQ. Maybe I'll start a new thread to see if anyone has more experience in Vivado. They were a little spammy, but maybe the MangoTree folks could point me in the right direction. It seems like a somewhat more legitimate path since NI has the VHDL export tool. @CJC IN six person @MT_Andy

    Regarding, "If we would go that route I do not expect to reuse much of the existing LabVIEW code in any way" that's rough to hear. If we could keep the LabVIEW code that would

    • Keep open the chance that we move back to the sbRIO after the supply chain straightens out
    • Help sell NI as a good platform for R&D and prototyping, since the code can move over more easily to the "final" product
  12. As I'm sure many of you know, there's an issue sourcing any NI products with FPGAs on them. Lead times are... out there. Anyone who can't tolerate those long lead times is probably thinking about a plan B. I'm wondering if anyone has gone through the process of designing a replacement for an NI product. Our application is written in LabVIEW, and one of our biggest risks is that the run-time engine isn't open source. There are so many test hours behind our LabVIEW app, but if we run into a "bug" and NI won't support it because it's on third-party hardware, we could really find ourselves in a bind. How much did you use the LabVIEW code, or did you just start from scratch? What's the process like? Expensive? Buggy?

  13. Yes, mostly just post processing.

    Comparators and other analog solutions are clever, but I'm expecting a use case with more than one comparator, each with a different user-defined threshold. I also have a use case of calculating things like the sum of the pixels over the whole image, which I guess could be analog, but an FPGA gives us maximum flexibility to change that up as the project evolves. I also want to take the binary images, perform combinational logic between pixels at the same location across different images, and then use particle analysis to pull out features (I mentioned Feret diameter above).
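    To make that concrete, here's roughly the kind of post-processing I mean, sketched in NumPy (the threshold values and the particular AND/sum steps are just examples):

    import numpy as np

    rng = np.random.default_rng(0)
    frame_a = rng.integers(0, 4096, (480, 640))        # two stand-in 12-bit camera frames
    frame_b = rng.integers(0, 4096, (480, 640))

    # Several user-defined thresholds, each acting like its own comparator.
    masks_a = [frame_a > t for t in (500, 1500, 3000)]

    # Whole-image statistic: the sum of all pixels (a simple accumulator on an FPGA).
    total = int(frame_a.sum())

    # Combinational logic between binary images at the same pixel locations.
    overlap = masks_a[1] & (frame_b > 1500)

    print(total, int(overlap.sum()))
    # Particle analysis (blob features, Feret diameter, ...) would then run on `overlap`.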

    We have an external calibration process, so calibrating counts to volts isn't important. Yes, layout, emissions, immunity, and the front end have all been fun. Communication wasn't super hard, but there were a few surprises. Testing and certification happen on a system basis, as this is a relatively small part of a bigger system (similar to calibration).

    I have another system where we rolled our own temperature input, RS232 port, and industrial output. We just didn't need that much accuracy, so it was NBD.

    Try to spec out a similar NI system: I expect we saved a few thousand dollars per unit, and you can get a fair amount of engineering time for that.

    I was thinking, if you wanted to avoid laying out your own board, you could always buy an eval kit.
