craig

NI

  1. I just became aware of this thread and I'm not going to go back and comment on all the discussion points made before, except to say: the original code, which did not use a shift register, was wrong, but happened to work because of a bug in the LabVIEW inplaceness algorithm. An input tunnel should retain its original value on every iteration of the loop, so it should have stayed not-a-refnum every time, and therefore the dynamic registration should have been lost each time the loop iterated. The issue was that the left dynamic registration terminal (which is always in-place to the right one) was also in-place to the input tunnel, causing its value to be stomped incorrectly. This is clearly a bug and needed to be fixed. It violated dataflow 'rules' for how tunnels are supposed to behave, and could have made some correct programs yield incorrect behavior. This is not a change that would break a correctly written program, so it does not qualify as an "API breaking" change; this usage has always been wrong but happened to work due to a bug. (That is categorically different from changing the behavior of correct code, or even changing undocumented behavior, which we try strenuously to avoid.)
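The tunnel-versus-shift-register semantics described above can be sketched in Python (this is an illustrative analogy, not LabVIEW code; the function and variable names are assumptions): an input tunnel re-presents the same initial value on every iteration, while a shift register feeds each iteration's output back into the next.

```python
def register(refnum, i):
    # Stand-in for dynamic event registration: given not-a-refnum (None),
    # it produces a new registration; given an existing registration, it
    # keeps it. Purely illustrative.
    return refnum if refnum is not None else f"registration-{i}"

def loop_with_tunnel(initial, iterations):
    """An input tunnel presents the original value on every iteration,
    so any registration made inside the loop is lost each time around."""
    result = None
    for i in range(iterations):
        tunnel_value = initial          # always the original (not-a-refnum) value
        result = register(tunnel_value, i)
    return result

def loop_with_shift_register(initial, iterations):
    """A shift register carries the updated value across iterations,
    so the registration made on the first pass persists."""
    carried = initial
    for i in range(iterations):
        carried = register(carried, i)  # previous iteration's output feeds back
    return carried
```

With a tunnel, `register` sees not-a-refnum every pass and re-registers each iteration; with a shift register, the registration from iteration 0 survives, which is the correct pattern the post is recommending.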
  2. This DAbort has to do with trying to access the Event Data for an event after the event's refcount has gone to zero and it has been deleted. This is one of those "shouldn't ever happen" situations, so we'd be very interested in what you are doing to generate the error. Perhaps your QMH nesting also involves nesting Event structures themselves, possibly operating on the same event queues? If you could post your VIs, or even better a simplified test case, I'll file a CAR with that information and make sure it gets investigated. Thanks, Craig Smith, LabVIEW R&D
  3. Indeed, LLVM is a fantastic compiler library: full-featured, powerful, and well-designed. As mentioned in the linked article, LabVIEW has been using it to implement the backend of our compiler since LV2010. We hope to make even better use of LLVM's features in the future, such as improved vectorization of user G code and generating AVX instructions on supported processors. One thing limiting our ability to fully harness the power of LLVM, however, is compile-time performance. LLVM is far more sophisticated than our pre-2010 legacy compiler and takes several times longer to generate machine code. Certain pathologically large or complex VIs can cause unacceptably long compile times, which is the reason for the "compiler complexity threshold" setting: it gives users a way to conditionally disable LLVM for large or complex VIs. Unfortunately, having to continue supporting our legacy compiler backend hinders our ability to fully leverage LLVM, because anything we implement with LLVM must either have an alternative implementation without it or be optionally disabled. We have been researching ways to mitigate the compile-time issues so that compile time is always linearly proportional to G code size, and we hope to remove the Compiler Complexity Threshold in the future, or tweak it to reduce optimization levels rather than disabling LLVM entirely. LLVM is constantly being improved and refined by the LLVM team and the open-source community, and we're excited to be able to continually draw on new features and optimizations as LLVM matures.
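The backend-selection idea behind a complexity threshold can be sketched as follows. This is a hedged illustration only: the threshold value, units, and names (`COMPLEXITY_THRESHOLD`, `choose_backend`) are assumptions for the sketch, not LabVIEW's actual implementation.

```python
# Hypothetical units for an estimated per-VI complexity metric.
COMPLEXITY_THRESHOLD = 5.0

def choose_backend(vi_complexity):
    """Pick a compiler backend based on estimated VI complexity.

    Models the trade-off described above: the optimizing LLVM backend
    produces better code but compiles slowly, so very large/complex VIs
    fall back to a fast, less-optimizing path to keep compile times
    acceptable.
    """
    if vi_complexity > COMPLEXITY_THRESHOLD:
        return "legacy-fast"
    return "llvm-optimizing"
```

The post's proposed refinement, reducing optimization levels instead of disabling LLVM outright, would amount to returning a lower LLVM optimization setting above the threshold rather than switching backends entirely.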