jacobson (NI)
Posts: 161 · Days Won: 10

Posts posted by jacobson

  1. I don't think it's always better to use the FPGA over DAQmx. The FPGA can be very useful if you need to do some sort of inline processing/scaling, a custom triggering scheme, or closed-loop control completely outside of the CPU, but if you're just going to be continuously streaming data to file (basically a headless data logger) then DAQmx would be my choice. One way of looking at it: if your FPGA code is just going to be a passthrough, you should probably just be using DAQmx.

    For the communication scheme, we would need to know more about how you're going to be interacting with the cRIO. That said, I would probably avoid having the cRIO act as a Modbus slave unless your host application is the master for other Modbus slave devices and you want to treat the cRIO the same as your other slave devices.
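
    If you do end up with the host as a Modbus master, the polling side is simple enough to sketch. Here's a rough Python example using pymodbus (3.x API); the IP address, port, register addresses, and unit ID are all placeholders, not anything from this thread:

        from pymodbus.client import ModbusTcpClient

        # Poll the cRIO like any other Modbus TCP slave (addresses are made up).
        client = ModbusTcpClient("192.168.0.10", port=502)
        client.connect()
        result = client.read_holding_registers(address=0, count=8, slave=1)
        if not result.isError():
            print(result.registers)  # raw 16-bit register values
        client.close()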

  2. 6 hours ago, hooovahh said:

    I have seen others on the forums state that this can be an issue.  What are the symptoms of hitting this limit?  Should I just set this value on all development machines just in case?

    When I had a customer run into this problem, they would get a DAbort "Couldn't create 24 pen" in drawmgr.cpp, and before that there were a ton of "GetDC failed in ISetGPort" DWarns coming from image.cpp (15+ DWarns in the 2 seconds before the crash, some within milliseconds of each other).
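
    If you want to watch for the problem before it crashes, a rough Windows-only Python sketch like this can poll a process's GDI object count via GetGuiResources and compare it against the per-process quota (10,000 by default, set by GDIProcessHandleQuota under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows). The PID and warning threshold below are placeholders:

        import ctypes

        GR_GDIOBJECTS = 0                  # GetGuiResources flag for GDI objects
        PROCESS_QUERY_INFORMATION = 0x0400

        def gdi_object_count(pid):
            """Return the current GDI object count for a process."""
            kernel32 = ctypes.windll.kernel32
            user32 = ctypes.windll.user32
            handle = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
            if not handle:
                raise ctypes.WinError()
            try:
                return user32.GetGuiResources(handle, GR_GDIOBJECTS)
            finally:
                kernel32.CloseHandle(handle)

        # e.g. warn when LabVIEW (hypothetical PID) nears the 10,000 default quota
        if gdi_object_count(1234) > 9000:
            print("warning: approaching the GDI handle quota")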

  3. My first thought would be: why even consider building your own solution if you haven't explored the existing alternatives? I only did some minor tests with Francois' API, but the client seemed to work well on both Windows and Linux, so if you built your own API and posted it to LAVA, the first question I would probably ask is why I should choose your API over the one I was already using. I think you have to spend some time evaluating existing APIs so you have a clear set of reasons for building yet another one.

    Even if you have reasons for not liking an existing API it might still be a good use of time to see if you can address those issues by working from the existing API. As an example, if you find that the performance isn't good enough in pure LabVIEW you might not need to start from scratch. If you can get a big performance boost by replacing a few key internal VIs with DLL calls maybe you can bring that to the owner of the existing API and see if that's a change they would make.

    If they're unwilling to make those changes (maybe they find value in keeping everything in pure LabVIEW) then you can start working on your own API with very clear reasons as to why someone would even want to use it. 

  4. On the FPGA side, reading 2 U32s or 8 U8s shouldn't make a difference in terms of throughput. Some old info I found internally basically said that if they don't have the same throughput, it's a bug.

    I also don't think the DMA throughput should be affected. If I remember correctly, the DMA engine will try to send multiple data items up at the same time to minimize the overhead of PCIe packet headers.
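
    As a back-of-the-envelope check (my numbers, not anything official), packing either width moves the same number of bits per cycle:

        # Eight U8s and two U32s are both 64 bits per FPGA clock cycle,
        # so at an assumed 40 MHz clock either layout streams ~320 MB/s.
        clock_hz = 40e6
        bits_u8 = 8 * 8    # eight U8 elements per cycle
        bits_u32 = 2 * 32  # two U32 elements per cycle
        assert bits_u8 == bits_u32
        print(clock_hz * bits_u8 / 8 / 1e6, "MB/s either way")  # 320.0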

  5. 1 hour ago, X___ said:

    LabVIEW is now third party software for NI?

    Kind of consistent, though, with making it its own independent product line, the survival of which will be justified by the number of annual licenses paid for...

    Software is being modified by both NI and third parties, so I think it would be safe to assume that LabVIEW is included under the umbrella of "NI software".

  6. In past years there was often a meetup at some random bar on the Sunday night before NIWeek to get some drinks and hang out. Anyone know if there are similar plans in motion this year? If not, does anyone want to meet up somewhere Sunday night?

  7. The question below seems to indicate that there may not be a call for presentations because they'll just pull from previously accepted sessions, but who knows. Hopefully there will still be good technical presentations.

    Quote

    I was selected to speak at NIWeek 2020. Will I be given a chance to speak at NI Connect 2022 automatically?

    There is not an automatic transfer of all previously accepted sessions into the 2022 program. However, we’ll prioritize currently approved sessions when 2022 conference planning begins and contact you regarding next steps.

     

  8. On 1/21/2022 at 8:22 PM, ShaunR said:

    I think this is a very positive thing that more should be encouraged to take part in.

    For application engineers and R&D devs to do sabbaticals, on loan, to companies would build good relationships in the industry and ensure tight cohesion and understanding between the developers of LabVIEW and the customers that use LabVIEW. There is nothing like a dose of one's own medicine to encourage improving the taste.

    The two times we had people in our AE team do this we never got them back 🙃

  9. On 1/1/2022 at 10:11 AM, ShaunR said:

    It's a lot worse than that. It affects all text languages that use a Unicode compiler (Python, C++, Delphi, et al.) and is undetectable by visual inspection of the source code. It isn't a programmer's application with a bug - you can't trust that the source code is doing what you think it is doing.

    VSCode's October update changed it so that directional formatting characters are displayed by default.

    https://code.visualstudio.com/updates/v1_62

    GitHub also added a warning if you are looking at a file with these characters so hopefully more IDEs are being updated to make this vulnerability more obvious.
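
    If your IDE doesn't flag them yet, a quick scan for the bidirectional control characters is easy to roll yourself. A minimal Python sketch (the character list follows the public CVE-2021-42574 write-ups):

        import sys

        # Unicode bidirectional controls abused by "Trojan Source" attacks:
        # LRE, RLE, PDF, LRO, RLO, LRI, RLI, FSI, PDI
        BIDI_CONTROLS = {"\u202a", "\u202b", "\u202c", "\u202d", "\u202e",
                         "\u2066", "\u2067", "\u2068", "\u2069"}

        def scan(path):
            with open(path, encoding="utf-8", errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    hits = [hex(ord(c)) for c in line if c in BIDI_CONTROLS]
                    if hits:
                        print(f"{path}:{lineno}: {hits}")

        if __name__ == "__main__":
            for p in sys.argv[1:]:
                scan(p)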

  10. I would generally suggest just using the language you're most comfortable with.

    I think Python is generally easier to integrate with, but you are limited in data types (no classes). If you're just doing some signal processing, though, you may be able to design an interface around that limitation without much difficulty. I also don't think LabVIEW supports calling Python from a specific virtual environment, which is definitely annoying.

    I think the C/C++ integration is a little less straightforward than Python, but if you're comfortable with the language and you read the help documentation on the Call Library Function Node and how LabVIEW stores data in memory, it's not that bad. You may also have to mess with some LabVIEW memory management functions, which can be annoying the first time you use them.
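
    For the Python route, the module you point LabVIEW's Python Node at can stay very plain. A minimal sketch of the sort of signal-processing function I mean (the function name is made up, and I'm assuming the node hands the array over as a list of floats):

        # moving_average.py

        def moving_average(samples, window):
            """Windowed moving average of a 1D signal (plain lists in and out)."""
            if window < 1 or window > len(samples):
                raise ValueError("window must be between 1 and len(samples)")
            running = sum(samples[:window])
            out = [running / window]
            for i in range(window, len(samples)):
                running += samples[i] - samples[i - window]
                out.append(running / window)
            return out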

  11. On 11/10/2021 at 9:06 AM, hooovahh said:

    NI used to have all new hires work support first before going to other departments.  This in general made support a bit green, and you'd need to escalate a couple of times before getting what you needed.  My career started as a co-op and so being thrown into the deep end of the pool certainly helped me learn quickly.  And I liked the idea of NI people all starting out having to quickly get familiar with NI's offerings, and being close to the customer issues.  I suspect NI has gotten feedback over the years that this model for support didn't work well and I heard NI was changing this policy.

    You are correct that phone/email support is now being handled by a separate Technical Support Engineering (TSE) team, which is now a career position (meaning they have senior/principal-level engineers working in the department). As examples, you now have folks like Darren N and Norm K working as TSEs. This is still a relatively new change (2 years?) so there are still a lot of newer engineers, but I think things are moving in the right direction with hiring very experienced engineers and being able to keep engineers within the department rather than losing them to constant attrition to other areas within NI. I know a few people in that group who started there out of college and are still there 5+ years later, which definitely would not have happened previously.

  12. You can always try calling in on a new case or transfer to a different technical support engineer (I know it used to give you this option if someone wasn't picking up). You don't have to ask them the same question, but they should at least be able to tell you what's going on (someone's OOO, the case is in some bad state, or, more often than not, both people think they're waiting for information from the other).

  13. I have two 9074 cRIOs I use as bookends at work and even a couple of books I got from stuff people were trying to get rid of.

    Most of the hardware I would be able to take home is old enough that I don't really want it (I'm not into home automation or maker stuff so I really don't care about old tech).

  14. Even though we know we will never go to the True case, I doubt LabVIEW would ever be able to determine that and properly remove the dead code. It would require LabVIEW to know that the timestamp output of the primitive, converted to a double, will never be less than 0. If you wire a false constant into the case selector, the VI won't lock up, so dead-code elimination does work in that instance.

    I remember running into issues with events being queued while a VI is only reserved to run, by (accidentally) embedding VIs that weren't running into subpanels. The VI isn't running, but clicking anything can still enqueue events and cause everything to lock up.
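
    A text-language analogue of the diagram (hypothetical names, just to illustrate why the compiler can't prove the branch dead):

        import time

        def block_forever():
            # stands in for the case that would lock everything up
            while True:
                time.sleep(1)

        def poll_loop(iterations=10):
            for _ in range(iterations):
                if time.time() < 0:   # always False at run time (epoch seconds are positive)...
                    block_forever()   # ...but not provably dead code at compile time
                time.sleep(0.1)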
