Everything posted by infinitenothing

  1. The advantage is that indexing an array of refnums does not change the internal semi-hidden active plot state of the control.
  2. I think either of those ideas would be superior. The graph should have a plots property that returns an array of plot references; we see this architecture with things like tabs having an array of page references. An invoke node that didn't force you to do a write when you only want to do a read would also avoid this problem. Even if it were a single action, is there a promise to maintain that in future versions? I solved this issue by wrapping the graph reference in a DVR.
  3. I just got hit by this bug. My writer below works fine (never exits) as long as I don't enable the reader (top loop). I see my error now, but the design was ripe for it. Attached: reader breaks writer.vi
  4. I couldn't actually find that example there. Is there a package I need? Or maybe it was moved or renamed in a different version of LV.
  5. I don't know about you guys, but I hate writing strings. There are just too many ways to mess it up and, in this case, it might just be too tedious. So I wanted to create a way to generate SQLite CREATE and INSERT statements based on a cluster. The type-inference code was based on JDP Science's SQLite Library. Thanks! Attached: create and insert from cluster.vi
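     For anyone curious what the idea looks like outside LabVIEW, here is a rough Python sketch of the same approach, with a dataclass standing in for the cluster and a hand-written type map standing in for the type inference (the names and mapping are illustrative, not taken from the attached VI):

        import sqlite3
        from dataclasses import dataclass, fields

        # Assumed mapping from field types to SQLite column types.
        TYPE_MAP = {int: "INTEGER", float: "REAL", str: "TEXT", bytes: "BLOB", bool: "INTEGER"}

        @dataclass
        class Measurement:            # hypothetical stand-in for the cluster
            timestamp: float
            channel: str
            value: float

        def create_statement(cls, table):
            cols = ", ".join(f"{f.name} {TYPE_MAP[f.type]}" for f in fields(cls))
            return f"CREATE TABLE IF NOT EXISTS {table} ({cols});"

        def insert_statement(cls, table):
            names = [f.name for f in fields(cls)]
            marks = ", ".join("?" for _ in names)
            return f"INSERT INTO {table} ({', '.join(names)}) VALUES ({marks});"

        conn = sqlite3.connect(":memory:")
        conn.execute(create_statement(Measurement, "measurement"))
        m = Measurement(0.0, "ai0", 1.23)
        conn.execute(insert_statement(Measurement, "measurement"),
                     (m.timestamp, m.channel, m.value))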
  6. How was the Con? Do you know if/when the videos will be posted to YouTube?
  7. I tested two other computers. Interestingly, I found that on those computers the consumer looking for the end condition couldn't keep up. I would think a U8 comparison would be reasonably speedy, but once I stopped checking the whole array I could get 11 Gbps. The video was pretty useless, as the manufacturer doesn't have recommended settings as far as I know, and I don't know if I have the patience to fine-tune it on my own. Parallel helps. I don't understand why, but the improvement is a few times faster than you'd expect from multiplying the single-worker rate by the number of workers. I'm more interested in single connections at this time, though.
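     To illustrate the "stop checking the whole array" fix in text form, here is a minimal Python sketch (the terminator value and chunk size are made up). The point is that only the newest bytes can complete the end marker, so the search cost per read stays constant instead of growing with everything received so far:

        import socket

        END = b"\xde\xad\xbe\xef"            # assumed termination sequence

        def read_until_end(sock: socket.socket, chunk_size: int = 65536) -> bytes:
            buf = bytearray()
            while True:
                chunk = sock.recv(chunk_size)
                if not chunk:
                    break
                buf += chunk
                # A newly completed marker must end inside the latest chunk, so it
                # starts no earlier than len(chunk) + len(END) - 1 bytes from the end.
                if END in buf[-(len(chunk) + len(END) - 1):]:
                    break
            return bytes(buf)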
  8. Good tip. I hate type guessing. I'm looking at you, TDMS palette.
  9. Localhost is a little faster: 3.4 Gbps. I'm seeing 80% CPU use on a single logical processor and 35% averaged over all the processors. iperf on localhost with default options isn't doing great: ~900 Mbps, but with very little CPU use. I suspect there's a better tool for Windows.
  10. I submitted my code. I don't think I had any greedy loops. I wonder if there's a greedy section of TCP Read though.
  11. I got similar performance from iperf. My question then is: what knobs do I have to tweak to get closer to 10 Gbps? I attached my benchmark code if anyone is curious about the 100% CPU. I see that on the server, which is receiving the data, not the client, which is sending. The client's busiest logical processor is at 35% CPU. The server still has one logical processor at 100% use and, as far as I can tell, it's all spent in the TCP Read primitive. Attached: tcp bandwidth test Folder.zip
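     For comparison outside LabVIEW, here is a hypothetical Python version of the same kind of benchmark: one side blasts fixed-size writes over TCP while the other counts bytes per second (host, port, and payload size are arbitrary choices, not taken from the attached folder). Run the server in one process and the client in another:

        import socket
        import time

        HOST, PORT = "127.0.0.1", 50007
        PAYLOAD = b"\x00" * 65536                 # 64 KiB per write

        def server():
            with socket.create_server((HOST, PORT)) as srv:
                conn, _ = srv.accept()
                total, start = 0, time.perf_counter()
                while True:
                    data = conn.recv(1 << 20)
                    if not data:                  # client closed the connection
                        break
                    total += len(data)
                elapsed = time.perf_counter() - start
                print(f"{total * 8 / elapsed / 1e9:.2f} Gbps")

        def client(seconds: float = 5.0):
            with socket.create_connection((HOST, PORT)) as conn:
                end = time.perf_counter() + seconds
                while time.perf_counter() < end:
                    conn.sendall(PAYLOAD)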
  12. Has anyone done a bandwidth test to see how much data they can push through a 10GbE connection? I'm currently seeing ~2 Gbps. One logical processor is at 100%. I could try to push more, but I'm wondering what other people have seen out there. I'm using a packet structure similar to STM. I bet jumbo frames would help. Processor on the PC that transmits the data: Intel(R) Xeon(R) CPU E3-1515M v5 @ 2.80GHz, 2808 MHz, 4 Core(s), 8 Logical Processor(s)
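     For anyone unfamiliar with that style of packet, here is a simplified Python sketch of size-prefixed framing in the spirit of STM: each message is a 4-byte length header followed by the payload (the real STM protocol also carries a metadata ID, which is omitted here):

        import struct

        def send_msg(sock, payload: bytes) -> None:
            # 4-byte big-endian length prefix, then the data itself.
            sock.sendall(struct.pack(">I", len(payload)) + payload)

        def recv_exact(sock, n: int) -> bytes:
            buf = bytearray()
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("socket closed mid-message")
                buf += chunk
            return bytes(buf)

        def recv_msg(sock) -> bytes:
            (length,) = struct.unpack(">I", recv_exact(sock, 4))
            return recv_exact(sock, length)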
  13. FYI, in my particular case there's no UI. It's all headless RT code targeting a ZYNQ [sb,c]RIO. There are a few TCP loops, some logic for homing motors, some RS232 commands out, a bit of data processing/analysis, etc. I can't go out to a DLL because my main roadblock is the run-time engine; if I can get the run-time engine working then I'll just keep everything as LabVIEW. Speaking of data flow, do you think some languages are better targets than others? The options I'm considering are Java and Python, mostly because I want memory management to stay out of my way as much as possible. Re: black pill. There are many other things I'd prefer to spend my time on.
  14. Why do you ask? You don't have to worry about buffer overruns if that's the sort of thing you're concerned with.
  15. Has anyone gone through the experience of rewriting your LabVIEW code into a different programming language? I'm wondering if it was a total rewrite or if you went line by line translating it to a new language. After the effort was over, was the end result still buggy? Did it take a while to get it back to its former reliability? For people who haven't gone through that: what's your game plan if the time comes when you have to move your code over? Related thread explaining some of the context of this question:
  16. Unfortunately, we tied our cart to the sbRIO. It's the controller for our flagship product and there would be a serious hit to revenue without it.
  17. Regarding the supply chain, we found and ordered ~50 boards that are very similar to a 9651: https://krtkl.com/snickerdoodle/ The FPGA interface doesn't look overly complicated; we can get most of that from Xilinx/Vivado since both the ARM and FPGA are on the ZYNQ. Maybe I'll start a new thread to see if anyone has more experience in Vivado. They were a little spammy, but maybe the MangoTree folks could point me in the right direction; it seems like a bit more of a legitimate path since NI has the VHDL export tool. @CJC IN six person @MT_Andy Regarding "If we would go that route I do not expect to reuse much of the existing LabVIEW code in any way": that's rough to hear. If we could keep the LabVIEW code, that would:
       • Make it so that there's a chance we move back to the sbRIO after the supply chain straightens out
       • Help sell NI as a good product for R&D and prototyping, since the code can move over more easily to the "final" product
  18. As I'm sure many of you know, there's an issue sourcing any NI products with FPGAs on them. Lead times are... out there. Anyone who can't tolerate those long lead times is probably thinking about a plan B. I'm wondering if anyone has gone through the process of designing a replacement for an NI product. Our application is written in LabVIEW, and one of our biggest risks is that the run-time engine isn't open source. There are so many test hours behind our LabVIEW app, but if we run into a "bug" and NI won't support it because it's on third-party hardware, we could really find ourselves in a bind. How much did you use the LabVIEW code, or did you just start from scratch? What's the process like? Expensive? Buggy?
  19. @Darren As far as I could tell, none of those nodes deal with the hardware I/O nodes. I did find a couple of nodes in the LabVIEW 2020\resource\Framework\Providers\lveio\ and vi.lib\eio\ folders that sort of seemed to work, but I still couldn't get grow to work. Here's the closest I could get. Attached: fpga script.zip
  20. Can you give us more details on what hardware you're using? Also, please attach a VI instead of a screenshot.
  21. Yes, mostly just post-processing. Comparators and other analog solutions are clever, but I'm expecting a use case of needing more than one comparator, each with a different user-defined threshold. I also have a use case of calculating things like summing the pixels over the whole image, which I guess could be analog, but an FPGA gives us maximum flexibility to change that up as the project evolves. I also want to take the binary images, perform combinational logic between pixels in the same location across different images, and then use particle analysis to pull out features (I mentioned Feret diameter above).
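     As a rough illustration of that post-processing (in numpy rather than on the FPGA, and with made-up image sizes and thresholds), the operations I have in mind look roughly like this:

        import numpy as np

        frame = np.random.randint(0, 4096, size=(60, 60), dtype=np.uint16)   # one image

        thresholds = [500, 1500]                          # user-defined thresholds
        binaries = [frame > t for t in thresholds]        # one binary image per threshold

        combined = binaries[0] & ~binaries[1]             # per-pixel combinational logic
        total = int(frame.sum())                          # sum of all pixels in the image
        print(total, int(combined.sum()))                 # particle analysis would follow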
  22. We have an external calibration process, so calibrating counts to volts isn't important. Yes, layout, emissions, immunity, and the front end have all been fun. Communication wasn't super hard, but there were a few surprises. Testing and cert are done on a system basis, as this is a relatively small part of a bigger system (similar to calibration). I have another system where we built a custom temperature input, an RS232 port, and an industrial output. We just didn't need that much accuracy, so it was NBD. Try to spec out a similar NI system: I expect we saved a few thousand per unit, and you can get a fair amount of engineering time for that. I was thinking, if you wanted to avoid laying out your own board, you could always buy an eval kit.
  23. NI has a great start to a solution for me with the FPGA toolkit I mentioned. I was hoping there were some other options; maybe I can drop a request in the third-party toolkit developer exchange. You can think of my ADC solution as a bit like a frame grabber. Basically, I'm going to grab the voltages off the pixel hardware one at a time, so imagine something like 25 MSPS that I then reshape into 60x60 arrays to get about 7k frames per second. Ideally, most of my processing can be done in parallel (pipelined) with the acquisition (imagine applying a threshold to each pixel as it comes off the ADC). Actually, I have the acquisition going fine right now; I can grab images. I was just complaining because the MIPI interface would be something I have to implement at this point. It's not a deal breaker (unless there's an inherent high worst-case latency limit on MIPI, in which case it might be), but the interface isn't doing me any favors. I am more focused on the processing on the FPGA; I don't want to reinvent, but I will if I have to.
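     To make the frame-grabber analogy concrete, here is a small numpy sketch of the reshape and the rate arithmetic (sample values and the threshold are made up): at 25 MSPS and 60 x 60 = 3600 pixels per frame, that works out to 25e6 / 3600 ≈ 6,944 frames per second, which is where the ~7k fps figure comes from:

        import numpy as np

        PIXELS_PER_FRAME = 60 * 60                  # 3600 samples per frame
        SAMPLE_RATE = 25e6                          # 25 MSPS off the ADC
        print(SAMPLE_RATE / PIXELS_PER_FRAME)       # ~6944 frames per second

        samples = np.random.randint(0, 4096, size=10 * PIXELS_PER_FRAME, dtype=np.uint16)
        frames = samples.reshape(-1, 60, 60)        # one 60x60 frame per 3600 samples
        binary = frames > 1000                      # threshold applied as frames arrive
        print(frames.shape, int(binary.sum()))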