Mads

Members · Content Count: 354 · Days Won: 20

Everything posted by Mads

  1. I have seen this a couple of times in my applications, and the fix has always been either to remove all languages but English, or to support all languages on the run-time languages page in the application builder (perhaps any change there is what really does the trick; I have not explored that, as I have always just moved on once the error disappeared).
  2. Just to tie off this thread: the reason for the glitch was that the wiring allowed for a minuscule race condition 😒. One minor adjustment was needed to ensure everything was *only* decided by the enqueue scheduling. So no worries about that anymore 😀
  3. Nice discussion, thanks for the link. The quoted statement does seem to contradict my observations, yes. I have not yet checked whether the misbehavior was absent in any of my earlier LabVIEW installations... I am working in Windows LabVIEW 2020 SP1 at the moment.
  4. The original producers acquire a reference to the consumer queue and call enqueue in a pre-allocated reentrant VI... But when multiple copies of these are waiting in parallel to enqueue, the time at which they started to wait does not decide when they get to do the enqueue. So as a test I checked how this worked if the enqueue VI was non-reentrant (which does not work in the actual system, as VIs enqueuing to a different consumer should not wait for the same VI, but just to test for now) - and that made everything run according to the time of call. I guess this comes down to th
  5. If the enqueue-and-wait-for-a-reply function of all producers sharing the same consumer is put in a non-reentrant VI, the execution *is* scheduled according to the order in which the producers call it. So that is one solution (a rough sketch of the idea follows below). Assuming that the execution of enqueue calls (or rather their access to the bounded queue) would be stacked in the same way, ordered by the time of call, seems a bit less obvious now that I think about it, but the level of queue jumping is still surprising. If for example there are 5 producers, 4 will at all times be waiting to enqueue (while the fifth in this case
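    A minimal text-language analogue of that serialization, sketched in Python (the class, its names and the ticket mechanism are illustrative assumptions, not what LabVIEW does internally): every producer passes through one shared chokepoint that hands out tickets on arrival, so blocked callers get to enqueue in ticket order - the same effect as routing every enqueue through a single non-reentrant VI.

        import threading
        import queue

        class FifoEnqueue:
            # Hypothetical sketch: serialize enqueue calls so producers are
            # served in the order they arrived (i.e. took a ticket), like a
            # single non-reentrant wrapper VI around Enqueue Element.
            def __init__(self, maxsize=1):
                self._q = queue.Queue(maxsize=maxsize)  # the bounded queue
                self._next_ticket = 0                   # next ticket to hand out
                self._now_serving = 0                   # ticket allowed to enqueue
                self._cv = threading.Condition()

            def enqueue(self, item):
                with self._cv:
                    ticket = self._next_ticket          # take a ticket on arrival
                    self._next_ticket += 1
                    while ticket != self._now_serving:
                        self._cv.wait()                 # wait for our turn
                self._q.put(item)                       # blocks while the queue is full
                with self._cv:
                    self._now_serving += 1
                    self._cv.notify_all()               # wake the next ticket holder

            def dequeue(self):
                return self._q.get()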
  6. Does anyone here know how LabVIEW decides in which order multiple instances waiting to enqueue to the same size-limited queue get to enqueue? 😵 If e.g. a consumer has a queue of length 1 (in this case it is previewing, processing, then dequeuing, to ensure no producer is fooled into thinking that the consumer has started on their delivery just because it was allowed to enqueue 1 element...) and multiple producers try to enqueue to this consumer, I have (intuitively / naively) assumed that if Producer A started to wait to enqueue at t=x, and other producers try to enqueue at t>x, producer
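    For reference, the preview-process-dequeue handshake described above can be sketched like this in Python (an illustrative stand-in, not LabVIEW's queue implementation). Note that the wakeup order of waiters on a Python Condition is also not guaranteed to be FIFO - the very scheduling ambiguity the question is about.

        import threading
        from collections import deque

        class PreviewQueue:
            # Sketch of a size-limited queue with a Preview (peek) operation:
            # the consumer previews, processes, then dequeues, so the single
            # slot stays occupied for the whole job and producers stay blocked.
            def __init__(self, maxsize=1):
                self._buf = deque()
                self._maxsize = maxsize
                self._cv = threading.Condition()

            def enqueue(self, item):
                with self._cv:
                    while len(self._buf) >= self._maxsize:
                        self._cv.wait()           # producer blocks while the slot is taken
                    self._buf.append(item)
                    self._cv.notify_all()

            def preview(self):
                with self._cv:
                    while not self._buf:
                        self._cv.wait()           # consumer waits for an element
                    return self._buf[0]           # look, but do not remove

            def dequeue(self):
                with self._cv:
                    item = self._buf.popleft()    # only now does the slot free up
                    self._cv.notify_all()
                    return item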
  7. I think this is about as wrong as it can get. If an indicator is wired only (no local variables or property nodes breaking the data flow), it shall abide by the rules of data flow. The fact that the UI is not synchronously updated (it can be set to be, but it is not here) can explain that what you see in an indicator is not necessarily its true value (the execution, if running fast, will be ahead of the UI update) - but it will never be a *future* value(!). As for breakpoints, they do not exist just in the UI - they are supposed to act at the code level, and their execution should be controlled by
  8. In the few cases where performance is that critical you will probably have to get by without traditional debugging anyhow. Do you expect your comfy car seat to occasionally disappear, and accept that as a consequence of wanting a quick car...? Sure, in the rare cases you need it for drag racing 😄 I do not consider key features like data flow and breakpoints something that should be allowed to occasionally break/act randomly. Either you have them and they work as they should, or you remove/disable them and explain/visualize why they are sacrificed (to get performance) until you are able to
  9. As drjdpowell mentions, the bug could be caused by optimizations, but it should still be considered a bug that optimizations are allowed to interfere with the data flow in debugging mode. I suggest you post it on the ni.com forum as well and see what NI says.
  10. Looks like a bug to me. It is not restricted to your example though. Breakpoints should execute according to the data flow, but often do not.
  11. Here is one that involves a nice mix of small challenges: my first assignment after being hired as an engineer back in 1998 was to write a multiplexer and demultiplexer. In that case we had 8 instruments outputting readings as ASCII strings every second (a fixed-length message containing a numeric value: "AA 2500BB\r\n"), and all those strings had to be read from 8 separate serial ports, tagged with a channel (c1, c2, c3 etc.) and then sent on through a single serial link (because we physically only had two wires available) to another PC where the signals would be split into the o
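    The multiplexer side of such a task could look roughly like this in Python with pyserial (the port names, baud rate and comma-delimited tag format are made-up assumptions for illustration; the original was of course done in LabVIEW):

        import threading
        import serial  # pyserial

        def mux_channel(port_name, tag, link, lock):
            # Read newline-terminated frames like "AA 2500BB\r\n" from one
            # instrument port, prefix them with a channel tag, and forward
            # them over the shared outgoing link.
            port = serial.Serial(port_name, 9600, timeout=2)
            while True:
                frame = port.readline()   # one fixed-length reading per second
                if frame:
                    with lock:            # one writer at a time on the shared link
                        link.write(tag + frame)

        link = serial.Serial("COM9", 9600)  # the single two-wire outgoing link
        lock = threading.Lock()
        for i in range(8):
            threading.Thread(target=mux_channel,
                             args=(f"COM{i+1}", f"c{i+1},".encode(), link, lock),
                             daemon=True).start()
        threading.Event().wait()  # keep the main thread alive

    The demultiplexer on the receiving PC would then just read the shared link line by line and route each frame to the right output by its channel tag.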
  12. I definitely prefer the pre-SP colors and icons. The SP1 LabVIEW "20" icon marking is completely unreadable... and the fonts, font sizes and layout of the welcome screen are all over the place. I do not understand how these things pass quality control 🤮 Post-sigh: and as always when upgrading to SP1, the license is no longer supported by our Volume License Server (even though our SSP agreement runs for another year...) - so a manual request for an updated license is once again required... Rinse and repeat later for the 2021 release... 😒
  13. Who said anything about debugging a built application? It's about seeing what you get without having to build it - because WYSIWYG. Many applications have multiple windows that run in parallel, and I want to see them like that during development. And I want multiple diagrams and front panels open while tracking the data flow and/or inserting debug values. I even want to be able to have panels open just to see them while I am working on something related, because it helps me maintain the full mental model of the thing I am working on. I do not want to be bothered minimizing windows all the
  14. For the first versions of NXG it was not possible. Then it started to allow you to have multiple instances of the VI open and hence to see both the diagram and the front panel at the same time, but each window had so much development-stuff surrounding it that it was not practical to have much more than one or two open. Hiding any of it to free up space and/or to see something closer to what you would see in the built application was not an option.
  15. Having the ability to work with multiple front panels viewed as they will look in the built application, and looking at multiple diagrams at the same time, has very little to do with breakpoints and reentrancy. It's about WYSIWYG, testing, and having a good understanding of multiple interacting parts of your system. Having a thin line between what you see in edit mode and what you get when running is invaluable, not just to the understanding of beginners (which is a great plus), but for anyone wanting to avoid surprises because they lost the connection between the code and the result...
  16. Dear NI

    I see a lot of people wanting this, but why? We code graphically after all. The way to make sense of the underlying code to a G-programmer is to present it as G-code, not text... Ideally we would have an SCC system made specifically for graphical code, but I do not expect that to become a reality (unless someone builds it on top of an existing one, perhaps). Personally I live relatively comfortably with the solutions we can already set up, but I would prefer to see them better integrated into LabVIEW and/or to have out-of-the-box solutions on how to get started with the various major SCC alternatives. If
  17. This mistreatment of WYSIWYG was the worst part of NXG. Having multiple front panels and block diagrams open at the same time, and being able to jump quickly between run and edit mode to do debugging and GUI testing, is one of the core strengths of LabVIEW. The lack of understanding of this was also reflected in other changes, like the removal of the Run Continuously button. The front panels need to present themselves as close to what they will be during run-time as possible (this greatly lowers the threshold for new users in understanding things, but also helps experienced developers maintain a
  18. The new branding is not my cup of tea, so hopefully that does not tell too much about the new management. Reducing (the need for) administrative positions could be a good thing. As for raising the quality and speed of the LabVIEW development, I hope they use their savings to keep and build a highly skilled, tight-knit, centralized team. The developers should all worship graphical programming 🧚‍♂️🧚‍♀️, even though many of them have to be proficient with many an awful text-based tool. If they have seen the light from the many lessons about the uniqueness of graphical vs textual pro
  19. Here's hoping the right lessons have been learned, and that things will jump and move in a better direction from now on.
  20. That would make sense if the question was whether they would support a third party creating a tool based on this type of manipulation. I do not understand why they do not support it within the project explorer though. When they control both the file format and the editor, supporting this type of target copying would just be a matter of updating it to their new format. They already convert the project file to new versions, so that part would be taken care of.
  21. I am sure it is possible to mess up any type of SCC system that way 🙃 The main complaints I have with SVN are really the slowness - mainly related to locking (it can be sped up *a lot* if you choose not to show the lock status in the repo browser though) - and the occasional need for lock cleanups... (when someone has checked in a whole project folder and it did not contain all of the necessary files, for example...).
  22. Slightly related topic: I wonder what the trend looks like for the share of questions in the NI discussion forums marked as resolved. Based on my own posts there, it seems to be getting harder to find a solution to the issues I run into. I am not sure if that is just because the things I do in LabVIEW are closer to the borders of regular use / getting quirkier, or if it is a sign of declining quality in the products involved. I suspect it is a mix of both. It would be cool if such statistics were readily available. A trend of the posting rate per forum/tag for example could reveal shi
  23. Sure, that's basic (always dangerous to say though, in case I have overlooked something else silly after all; it happens 😉). The lvlib and its content are set to always be included, and the destination is set (on source file settings) to the executable. The same goes for the general dependencies group. The dynamically called caller of some of the lvlib functions, on the other hand, is destined to a subdirectory outside the executable, and ends up there as it should. But then so does lots of the lvlib stuff - seemingly disregarding that its destination is explicitly set to be the executable
  24. I happen to have some JKI JSON calls, among other things, in a dynamically called plugin, and it seems that whatever I do in the application build specification to try to get all those support functions (members of lvlibs) included in the executable, the build insists on putting the support functions as separate files together with the plugin (the plugin is here a VI included in the same build, destined for a separate plugins folder). (Sometimes I wonder if there is a race condition in the builder; what does it do for example if there are two plugins included in the build that will c