Everything posted by Taylorh140

  1. Actually, the fact that the data is a string of commands makes a lot of sense. When I did not use smooth display, I saw the individual elements being drawn even with front panel updates deferred. Since I used bunches of static text, I decided to draw it only the first time: I erase first by setting the value to 2 (erase every time), then set it to 0 (never erase). I tried the 1 setting (erase first time), but it always erased the content I wanted to keep from the first draw. I also noticed that doing operations like Erase First inside a subVI did not propagate up a level; the same operations had to be applied to the top-level picture control. After those changes I went from 30 fps to 120 fps, which I find more acceptable. The control only used the Picture Palette drawing functions and looked somewhat like the attached image, except with real information:
  2. I have never gotten the performance I want out of the 2D picture control. I keep thinking it should be cheaper than using controls, since it doesn't have to handle user input, click events, etc., but it always seems slower. I was wondering whether any of the wizards out there have 2D picture control performance tips that could help me out. Some questions that come to mind: Is the conversion from pixmap to picture (and back) costly? Why does the picture control behave poorly in a shift register? What do the Erase First settings cost performance-wise? Anything you can think of that is generally a bad idea with picture controls? Anything that is generally a good idea?
  3. I typically view the conditional disable structure as a different kind of item; it's not really that different from a case structure with a global wired to the selector terminal: run this code or that code depending on the configuration of the system, but the code still runs in the production environment. This idea is a bit different, because the code runs only in the development environment. It is, in some ways, a very convenient pre-build step, and it also increases consistency: why do a pre-build step on PC and VI memory initialization on FPGA when you could do both in a uniform fashion? Why not use LabVIEW as its own pre-processor? The value would be in things like reading configuration files from the development PC that I don't want editable in production, easily labeling the front panel with the SVN revision and build time, and perhaps even reducing startup time for LV apps by pre-processing startup code. To me this feels like it would be a great additional structure. I do appreciate your thoughts on the topic, thanks.
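The "LabVIEW as its own pre-processor" idea maps naturally onto a script-based pre-build step. Purely as an illustration (the function and constant names here are hypothetical, not part of any LabVIEW or NI tooling), a sketch in Python of generating build-environment constants at build time:

```python
# Hypothetical pre-build step: bake development-environment values
# (SVN revision, build time) into generated source so the production
# build carries them as compiled-in constants rather than reading files.
from datetime import datetime, timezone

def generate_constants_module(svn_rev: str) -> str:
    """Emit source text for a constants module; the names are illustrative."""
    build_time = datetime.now(timezone.utc).isoformat()
    return (f'SVN_REV = "{svn_rev}"\n'
            f'BUILD_TIME = "{build_time}"\n')

print(generate_constants_module("r1234"))
```

A build script would write this text to a file the application compiles in; the proposed LabVIEW structure would instead emit the constants directly on the diagram at compile time.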
  4. I posted this on the Idea Exchange a few days ago. I have been wondering why no one sees the need for constant generation at compile time. I think this would be especially useful in FPGA, but also for constants that need data from the build environment. Perhaps there is just a better way of doing these types of operations? Let me know what you think.
  5. Yeah, I was actually doing the opposite: using an array of strings and a value to generate an enum at runtime. I was afraid what I was trying to do might be confusing. I might be getting encoded data from a device, and to decode it I use variant attributes instead of a cluster (which carries a great deal of useful information apart from the values). Since the specification is sometimes in a document, it is easier to pull from that source as opposed to decoding everything with LV wires and clusters. However, when looking at variant attributes, especially for enumerations, it's hard to see what the value actually means. This allows me to have a probe view of the data that looks like the right side as opposed to the left.
  6. So it wasn't so bad to put this together. I hope it ends up being useful! Variant_Scoped_Enum.vi
  7. So sometimes when you do protocol decoding it is convenient to use a variant to store the decoded data. For example, you might have a 2-bit enumeration that is decoded like this: 00 -> Good 01 -> Bad 10 -> Ugly 11 -> Catastrophic If you cast the value to an enumeration that contains these values beforehand, you can see them on the variant control. If you use a ring, you will only see a number. I know the LV flattened string contains the enumeration strings, but the encoding is a bit of a mystery, although it looks like the OpenG palette has figured it out to some degree. To me it doesn't look like there is any reason I couldn't generate an enum to use inside the scope of a variant. Has anyone done this, or does anyone know how to generate the string for this purpose?
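For comparison, here is the same decode table sketched in a text language. Python's Enum plays the role of the scoped enum: once the raw field is mapped into the enum, a probe/print shows the name instead of a bare number. This is an analogy only, not the LabVIEW flatten format:

```python
# Sketch of the 2-bit decode table from the post. Mapping the raw field
# into an Enum makes inspected values self-describing ("Ugly", not 2).
from enum import Enum

class Status(Enum):
    Good = 0b00
    Bad = 0b01
    Ugly = 0b10
    Catastrophic = 0b11

def decode_status(word: int, bit_offset: int) -> Status:
    """Extract a 2-bit field at bit_offset and map it to the enum."""
    return Status((word >> bit_offset) & 0b11)

assert decode_status(0b1000, 2) is Status.Ugly  # field value 0b10
```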
  8. I found this gem! Strange that that isn't the recommended setting in the tutorial. http://digital.ni.com/public.nsf/allkb/4A8B626B55B96C248625796000569FA9
  9. I am attempting to use a LabVIEW executable as a background service. I got two good pieces of information from these sources: http://digital.ni.com/public.nsf/allkb/4A8B626B55B96C248625796000569FA9 http://digital.ni.com/public.nsf/allkb/EFEAE56A94A007D586256EF3006E258B Now the window does not flicker or really appear at all, and it doesn't show up on the taskbar. However, IT STEALS FOCUS from other applications when it starts, which interrupts data entry. I am using some .NET code to start up my processes (I was hoping that would do most of the work for me). I was curious whether anyone had any other suggestions.
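For what it's worth, the usual trick outside of LabVIEW is to launch the child with a show-window flag that does not activate its window. A hedged sketch in Python (SW_SHOWNOACTIVATE is the documented Win32 value 4; whether it actually cures the focus steal depends on how the child process creates its window):

```python
# Sketch: start a child process on Windows without letting its window
# take focus, by pre-setting the show state in STARTUPINFO.
import subprocess
import sys

SW_SHOWNOACTIVATE = 4  # show the window but do not activate it

def start_background(cmd):
    if sys.platform == "win32":
        si = subprocess.STARTUPINFO()
        si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
        si.wShowWindow = SW_SHOWNOACTIVATE  # or 0 (SW_HIDE) to hide entirely
        return subprocess.Popen(cmd, startupinfo=si)
    return subprocess.Popen(cmd)  # the STARTUPINFO flags are Windows-only
```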
  10. I see what you're saying now. When looking at a single ASCII character, it would make sense to include the last point, just as they do with integers. However, there is no character type, only a character array type (string), and in an effort to make case statement ranges more useful in the string realm they made a notation that is essentially a begins-with function (a regex would probably be more useful, but slightly slower). This does explain it a bit, though. @ShaunR I am also glad it is not C++.
  11. Absolutely, I find it weird. It doesn't match the integer notation, and even more so, it includes the first item but not the last. I was using it to match the numerics 0-9; yes, you could use a regex, but it's a range, and 0-9 in a simple case statement should work. It's weird to have to write "0"..":"; I had to reference my ASCII table just to know which character to use. And I do agree it is stated in the reference above, but I generally dislike inconsistency, especially with logical concepts that should be transferable; just because something is documented doesn't mean it cannot be incorrectly inferred or forgotten. That being said, I would really like to know the rationale; however, if there has to be a discussion, it doesn't bode well for the "feature".
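The rule being complained about can be stated compactly: LabVIEW string ranges include the lower bound and exclude the upper bound, compared lexicographically. A small Python sketch of that semantics (the helper name is mine, not a LabVIEW function):

```python
# Sketch of the string-range rule discussed above: the comparison is
# lexicographic and, unlike integer ranges, EXCLUDES the upper bound.
# Matching digits therefore needs "0"..":" since ":" follows "9" in ASCII.
def in_string_range(s: str, lo: str, hi: str) -> bool:
    """Half-open lexicographic match, as the string case selector behaves."""
    return lo <= s < hi

assert in_string_range("5abc", "0", ":")   # strings starting with a digit match
assert not in_string_range("9", "0", "9")  # "0".."9" misses "9" itself!
assert in_string_range("9", "0", ":")      # chr(ord("9") + 1) == ":"
```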
  12. I noticed something inconsistent today with the case selector, specifically the inclusion of the range terminators. Guess the outputs for Q1-4. The purpose of this post is mostly to help others not make this (seemingly easy) mistake, and also to ask: how is this discrepancy useful?
  13. So I did a quick run, and I should state up front that the methods above can't really be compared apples to apples, partly for the following reasons: some methods only accept one element at a time, so if you need to enter 1000 points at once they will probably be slower and involve more operations; and some methods, like the circular buffer, are much more useful in certain situations, e.g. where the buffer is needed in different loops or filled at different rates. Here are the numbers for single-point (one element at a time) inputs. How long does it take to process 10000 input samples with a buffer size of 1000 on my computer?
Taylorh140 => 8.28579 ms
infinitenothing => 2.99063 ms (looks like shifting wins)
Data Queue Pt-by-pt => 9.03695 ms (I expected this one to beat mine)
hooovahh Circular Buffer => 8.7936 ms (nearly as good as mine, and it uses a DVR)
I would consider all of these winners, except maybe the Data Queue pt-by-pt (though it is available by default, which gives it a slight edge). Perhaps I'll have to do another test where the inputs are more than one sample. Note: if you want to try the source, you'll need the circular buffer XNodes installed. buffer.zip
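For anyone without LabVIEW handy, a rough text-language analogue of the single-point benchmark (the strategies only loosely correspond to the VIs above, and the timings are machine-dependent):

```python
# Rough harness (not the benchmark from the post): two point-by-point
# rolling-buffer strategies, same buffer size and sample count as above.
import timeit
from collections import deque

SIZE, SAMPLES = 1000, 10000

def list_slice():
    """Append, then trim to the newest SIZE elements by slicing."""
    buf = []
    for i in range(SAMPLES):
        buf.append(i)
        if len(buf) > SIZE:
            buf = buf[-SIZE:]
    return buf

def deque_maxlen():
    """deque(maxlen=SIZE) drops the oldest element automatically."""
    buf = deque(maxlen=SIZE)
    for i in range(SAMPLES):
        buf.append(i)
    return buf

assert list(deque_maxlen()) == list_slice()  # both retain the same samples
print("slice:", timeit.timeit(list_slice, number=10))
print("deque:", timeit.timeit(deque_maxlen, number=10))
```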
  14. I find myself frequently looking for a good pattern for collecting a pool of array elements until they reach a certain size and then removing the oldest elements first. I have used very stupid methods, like a bunch of feedback nodes fed into a Build Array node. But today I thought up one that I really enjoyed, and I thought I'd share it. It's a simple pattern with no crossing wires. Perhaps someone has thought of something better; if so, don't hesitate to share.
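In text-language terms the pattern is a bounded FIFO. A minimal Python sketch using collections.deque with maxlen, which drops the oldest element automatically once the buffer is full:

```python
# "Collect until a certain size, then discard oldest first" as a
# bounded deque; maxlen enforces the size limit automatically.
from collections import deque

history = deque(maxlen=5)
for sample in range(8):
    history.append(sample)  # once full, each append evicts the oldest

print(list(history))  # [3, 4, 5, 6, 7]
```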
  15. This is a really nice step in the right direction. It is notable that this kind of development seems easier for the developers of NXG than for traditional LabVIEW; perhaps they cut out a great deal of development overhead.
  16. I have to admit I've never gotten to use the State Chart module on a project, but I've always wanted to. I really enjoyed it when I evaluated it, but I worry it won't be available to NXG users... ever. Of course, I can hope it will be re-tooled and better than ever on NXG. What do you guys think will happen?
  17. After working on the set-cluster-element-by-name XNode, I realized I could use the same concept to convert a variant array to a cluster. The technique is actually pretty simple: the XNode generates a case for each element in the cluster, in cluster order, where a Bundle by Name sets the value, an Unbundle by Name gets the type, and a Variant to Data converts the data. This has some benefits over other methods: you are not limited to 255 elements. That limit rarely matters, but some of us are paranoid that clusterosaurus giganticus will be larger than expected. It also has a drawback: when converting from a variant array, all the elements must have unique, non-blank names, though that is usually the case. I think this technique (though very brute-force) might be useful for some other things; let me know what you guys think. VariantArrayToCluster.zip
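A loose text-language analogue of the technique: set fields of a typed record by name, coercing each value to the declared field's type, and reject blank or unknown names. The Settings record and the from_pairs helper are hypothetical stand-ins for the cluster and the XNode:

```python
# Rough analogue of "variant array to cluster": map (name, value) pairs
# onto a typed record by field name, coercing to each field's type.
from dataclasses import dataclass, fields

@dataclass
class Settings:          # stand-in for the target cluster
    gain: float = 1.0
    offset: int = 0
    label: str = ""

def from_pairs(cls, pairs):
    """Build cls from (name, value) pairs; names must be unique, non-blank,
    and match a field, mirroring the XNode's requirement."""
    names = {f.name for f in fields(cls)}
    obj = cls()
    for name, value in pairs:
        if not name or name not in names:
            raise KeyError(f"no field named {name!r}")
        # coerce to the type of the field's current (default) value
        setattr(obj, name, type(getattr(obj, name))(value))
    return obj

s = from_pairs(Settings, [("gain", "2.5"), ("offset", 7), ("label", "run1")])
```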
  18. Now supports recursive cluster elements for selection. SetClusterElement.zip
  19. I added a LV2013 Version, I hope that helps. Let me know if you have any issues or comments. I have always wondered why this isn't a standard function.
  20. This XNode allows setting a cluster element by label string without using references. I pulled a great deal of inspiration from Hooovahh's Variant Array to Cluster XNode, which I suppose this could be used for; another benefit is that it's not limited to 255 elements. It's mostly experimental, because I haven't used it much. SetClusterElement.zip SetClusterElementLV2013.zip
  21. I noticed a little gem that was added, the "Is Value Changed.vim", but I found its implementation curious. Typically, when I want this functionality, I would use a feedback node. Are feedback nodes becoming taboo?
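For comparison, the same behavior in a text language is a closure that remembers the previous value, playing the role the feedback node would:

```python
# Minimal sketch of "Is Value Changed": the closure variable holds the
# previous value between calls, like a feedback node holds state
# between loop iterations.
def make_value_changed():
    prev = object()  # unique sentinel: the first call always reports a change
    def changed(value) -> bool:
        nonlocal prev
        result = value != prev
        prev = value
        return result
    return changed

vc = make_value_changed()
print([vc(x) for x in (1, 1, 2, 2, 1)])  # [True, False, True, False, True]
```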
  22. I think you rediscovered the source! Thanks! (this seems like an updated version of mine.)
  23. There is often a disconnect between what IT wants and what is needed for testing. One of those things is Windows updates: IT really needs interconnected computers to be up to date, but that often means a policy that forces computers to shut down and reboot, which is awful for longer tests (the solution to this one is difficult, to say the least). Another is forced logoff, locking screens, and the like. IT policy on power saving is usually "the more the merrier," but when running tests this kind of thing needs to be disabled; I certainly do not want my computer hibernating while a test is running (and that's not counting weird USB power-saving settings). I have seen some interesting solutions to this one: Mouse jigglers, which exist as both hardware and software (what has the world come to?): https://www.amazon.com/dp/B00MTZY7Y4/ref=cm_sw_r_cp_dp_T2_1YOqzbZXHNQW2 Disabling power settings in Windows (requires admin; sometimes group policy resets it). My personal favorite: the Prevent Screen Saver tool, which is attached (I don't remember where I got it). It uses the same call Windows Media Center does to keep the system from sleeping. It's also nice that when the application closes the screen can lock again, which I find better, and it doesn't require admin. Testing requires the system configuration to remain largely static; for security reasons this rarely happens. Testing would also like internet access to send notifications and pass information to outside data servers. Solutions like an air gap are not really complete; what is really needed is a one-way communication valve, so the test computer can notify and send data but cannot necessarily receive anything back. That also implies that data leaving the computer must be guaranteed collision-free to prevent data loss, since handshaking would not be possible.
Perhaps virtual images will provide a solution in the future, but I still think they integrate poorly with hardware. In general, Windows default settings align more with IT than with what good LabVIEW work requires. There are a great many things to consider for long tests using LabVIEW on Windows, the number grows with every new Windows release, and IT isn't helping. It would be nice to have a library to handle these needs (maybe the Windows people have a plan). PreventScreenSaver.zip
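The call the attached tool most likely uses (an assumption on my part, based on the Media Center remark) is Win32's SetThreadExecutionState. A sketch in Python via ctypes; the ES_* values are the documented Win32 constants, the state reverts automatically when the process exits, and the function no-ops off Windows:

```python
# Sketch (assumption: this is what PreventScreenSaver does) of keeping
# Windows awake via SetThreadExecutionState.
import ctypes
import sys

ES_CONTINUOUS       = 0x80000000  # hold the requested state until cleared
ES_SYSTEM_REQUIRED  = 0x00000001  # prevent sleep/hibernate
ES_DISPLAY_REQUIRED = 0x00000002  # keep the display on

def prevent_sleep() -> bool:
    """Ask Windows not to sleep while this process runs; False elsewhere."""
    if sys.platform != "win32":
        return False  # no-op on non-Windows systems
    state = ES_CONTINUOUS | ES_SYSTEM_REQUIRED | ES_DISPLAY_REQUIRED
    return ctypes.windll.kernel32.SetThreadExecutionState(state) != 0
```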
  24. Currently SVN, with TortoiseSVN (source code separated from executables). I would change to Git, but I haven't decided on the best implementation or client.