Everything posted by hooovahh

  1. You can't without additional hardware. The GPIO on the Pi is digital only. Add an Arduino with custom scaling, or I2C or SPI hardware, and then the Pi can read that and turn it into something useful. There are currently two separate toolkits that allow for writing a LabVIEW VI and having it run on a Pi: the NI LINX toolkit, which at the moment is LabVIEW 2014 only and for non-commercial use, or the Pi compiler for LabVIEW from TSXperts, which comes in home and professional licenses. That option also lets the front panel be viewable on the Pi's HDMI output, making for a neat HMI. This is all optional if you are good at programming on the Pi, since it can just take data and send it back to a VI running on a more traditional target, or I've seen people host a webpage on the Pi.
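If you go the external-ADC route, the Pi-side code can stay small. Here's a minimal Python sketch, assuming a hypothetical setup with an MCP3008 ADC wired to SPI bus 0, chip select 0, read through the spidev package; adjust for whatever chip you actually wire up:

```python
import spidev  # SPI access on the Pi

spi = spidev.SpiDev()
spi.open(0, 0)               # bus 0, chip select 0 (assumed wiring)
spi.max_speed_hz = 1350000

def read_adc(channel):
    # MCP3008 single-ended read: start bit, channel select, then clock out data
    raw = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((raw[1] & 3) << 8) + raw[2]   # 10-bit result, 0 to 1023

volts = read_adc(0) * 3.3 / 1023.0        # scale against a 3.3 V reference
print(volts)
```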
  2. Yes I've done this, but there are considerations for sure. I don't know if this is a hard requirement, but the server likely needs to be running Windows, and you need full access to it for installing things like the LabVIEW run-time engine. The VI itself needs to be running on that server, and using the code Thomas linked to you'll basically be pushing the front panel to a webpage. This means that everyone who views that page will be seeing the same VI's front panel. Now if NI were here they would likely say something about NXG and the WebVI technology. This allows for compiling a VI into a webpage running JavaScript. Each client will then be running this page in the sandboxed browser and will have their own view. This can also be put on a webserver and works pretty well within the limitations of NXG, which are numerous at the moment. A major benefit of this design, however, is that it can likely run on any modern webserver, and beyond what comes with XAMPP you don't need anything else.
  3. Samsung is hiring! We are looking for a single LabVIEW developer to help with some new production end-of-line testers. I'm thinking this person should be CLD equivalent, but certification isn't a requirement. The job is posted on ZipRecruiter.
  4. The only thing I've used in a real project is the WebVI feature. I create WebVIs that get data from current-gen LabVIEW programs, and this seems to work well enough. So database queries, test status, the type of information that just gets displayed as a web page. Could it be done in something else? Of course, but I like G and I'm good at it. These web-based projects are working and deployed. As for full real test applications written entirely in NXG, it doesn't seem like something I'll try for a while, since feature parity is still a long way off.
  5. I have no info but wanted to say I won't be there this year, and have made that clear to the other LAVA BBQ organizers. I'm choosing to go to the CLA Summit in Austin instead. If I do hear anything I'll be updating threads here, on the dark side, and twitter.
  6. That is seriously small and would be very annoying. Hope you have custom DPI settings for your mouse.
  7. Oh yeah, I wasn't suggesting performing the value change on the booleans. I was suggesting getting the coordinates, then performing a mouse move and mouse click, since I assume Squish does the same thing more or less.
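In Python terms, the idea is just synthesized mouse input at known coordinates. A minimal sketch, assuming the pyautogui package and made-up coordinates for the boolean control:

```python
import pyautogui  # cross-platform synthetic mouse input

x, y = 150, 220          # hypothetical screen coordinates of the boolean
pyautogui.moveTo(x, y)   # mouse move onto the control
pyautogui.click()        # mouse down/up, which fires the Value Change
```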
  8. Zou is probably right. LabVIEW does a lot of nonstandard things in its UI drawing, likely so it can be more easily cross-platform with Mac and Linux. Have you thought about possibly leveraging a LabVIEW EXE to get information about the position of other controls? I could imagine a LabVIEW EXE that opens an application instance to an already running EXE, gets a reference to the VI front panel, and then references to the controls themselves. It might be possible at that point to get the position of a control based on the control label in the VI. Then your tool could get the position of a control based on its name. But at that point one could make the argument to just use LabVIEW to control LabVIEW. Nothing sounds like a good solution.
  9. Just an FYI, there is an INI key that can turn this off. It is not exposed in the options, but it probably should be. Changing workflows can be a good thing. Years ago I thought the best way to keep a record of my source code was to zip up all of my source every day. I'd have a zip for each day with that day's snapshot of code in it. I'm so glad I learned about proper SCC and the workflow changes that came with it. QD and conditional tunnels have likely saved me man-months of time and I'm grateful for it. SCC has probably saved me more than that in the number of times I would otherwise have lost data, or opened the wrong version unintentionally. That being said, many improvements you don't like you simply don't need to use.
  10. I think most (if not all) AI DAQ hardware has a single hardware timer inside it for triggering the analog-to-digital converter. This is why only one task can be running at a time. That single task can of course be configured to read N channels at the same sample rate. So one solution might be to create an asynchronous task whose sole purpose is reading all the DAQ channels in a single task and throwing them into a global. Then the reentrant VI clones just read from that global. Attached is an update which works with my simulated hardware. It will read the first 8 analog signals on the "Dev1" device in a loop at 100 Hz. Four times a second it will read 25 samples from each of the 8 channels and then push them to a global that limits the amount of data stored. Then in the reentrant VI, instead of randomly generating data, it reads from that global. So now you can display 8 things from up to 8 channels, but they could all be from channel 1. Now you can have a graph and a digital display of channel 1, and maybe graphs of channels 2 through 7. If your hardware only has 4 analog inputs you can change the array subset function to that, and then update the enum for signal selection. Signal Selection Demo 8 Way With DAQ.zip
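The attached demo is G, but the producer side of the pattern looks roughly like this sketch in Python with NI's nidaqmx package. The device name, rate, and the dictionary standing in for the global are assumptions matching the description above, not the attached code:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# One hardware-timed task owns the ADC; everything else reads the shared copy.
latest = {"data": None}   # stand-in for the LabVIEW global the clones read

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:7")   # first 8 channels
    task.timing.cfg_samp_clk_timing(100, sample_mode=AcquisitionType.CONTINUOUS)
    while True:  # sketch only; a real app needs a stop condition
        # 25 samples per channel at 100 Hz = roughly 4 reads per second
        latest["data"] = task.read(number_of_samples_per_channel=25)
```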
  11. First, are there any errors? Second, what is the value of Data? This is important. Leaving it blank will mean do nothing, whereas filling it in might mean reading specific memory registers. You'll likely want to consult the manual, or even some other text-based libraries, to get an understanding of what registers need to be read, and how to scale the data that is read.
  12. Tunnels were a big one for me. Was there something added to the functionality of static VI references? Maybe adding the right click for their type? Or maybe I'm thinking of the features in dynamically calling VIs allowing run and collect to be separate operations? Many QD and other functions can be back-saved (but not all successfully). Others I could live without but do use all the time are dragging into and out of structures, live drag, and adding and removing space with the Ctrl and Alt modifiers, oh and snapping add/remove space to 1D.
  13. Looking at older IDEs is a bit discouraging when it comes to IDE performance. When I was beta testing NXG I made some comparisons in startup time, opening a simple project, opening a simple VI, and making a basic edit. NXG was something like a couple of minutes slower than 2017 for the whole process. I imagine the gap in responsiveness between 8.5 and 2017 is just as large as the gap between 2017 and NXG. And I remember when first using the 8.x versions how dog slow they seemed in comparison to 7.1. Maybe in a few years NXG 1.0 will seem fast.
  14. You cannot target this board with LabVIEW and deploy G code to it. Only NI's FPGAs can be added as targets.
  15. Okay, deep breaths everyone (myself included, I took part in this too). I think we all understand where everyone sits. I think Shaun understands that starting in 2017 NI made a change to fix what they deemed a bug regarding buffer reuse that caused the functionality to differ from what they expected. This is seen as a good thing by some and a bad thing by others. After watching much of the linked video from Linus, my opinion, which was far in one direction, swings just a little bit closer to what Shaun was suggesting, thank you for that. But I think an important point he made in that video was that the amount of effect a change has should make or break the change. At one point someone asked what version of GCC a distro should use moving forward, and he more or less said, well, the number of people affected by this change is somewhat small, so "you are off the hook". The most core library in the system affects tons of people, and so breaking APIs there needs to be done cautiously. The larger the impact, the more likely you should just leave it alone. Do lots of developers use user events? Yup, I think so. Do lots of them use them the way Shaun is showing here? Well, it is hard to say, but I would say it is the minority of those that use user events in general. NI has broken other compatibility in the past. Primarily I think of the INI fiasco a few years ago, but at least in that case these were vi.lib functions not on the palette, so they assumed few would be affected. When possible I do see NI make an effort to preserve functionality. Think about when you are opening an older VI in a newer version; there are times a small subVI might be inserted automatically to keep the same functionality. I saw this with some TDMS functions where a function wouldn't generate an error in older versions but would in newer, so NI would insert a clear-error on that specific error if you opened older code in newer LabVIEW. Most of us work in pretty closed environments when it comes to our own APIs; we can break compatibility and just tell the couple of people (or update the couple of projects) that things may need a relink or some tweaking to be fixed. As teams grow, our concept of the difficulties given to those using our APIs needs to grow too. NI certainly has a larger user base on their APIs than I do, and then there is the fact that my APIs are free, while you are paying NI to make updates that might break your code, which can be discouraging.
  16. This is a DVR analogy that represents why the event structure example doesn't work: This is more representative of what you are doing. Yes it worked previously, but it shouldn't have. If this VI worked in 2009 and doesn't in 2017, would you also claim it is a feature and not a bug? EDIT: Also that video sounds interesting so far but is quite long; is there a part you find most relevant? Double edit: Seems I found it around 9 minutes in.
  17. Right, but that isn't what is happening in the event example. Yes you are providing a type, telling the event structure what type to expect, but this isn't happening on just the first iteration of the loop (like the DVR example). You are providing a type, and providing what the event structure expects is a valid refnum that points to the specific events it should react to. And every iteration of this loop you are providing a null reference. The DVR example works because the new reference is used; the event structure doesn't, because that new reference is replaced with a null at the start of the next loop. I agree that there is a type of feedback node internal to the event structure, allowing it to know what it registered for last time. But by providing a new event to the outside of the event structure, it believes you are telling it to now use that event instead of the last one you told it to.
  18. This isn't the same issue as your event example above. In the DVR case you actually made a way for the next iteration to use the newly created reference. In the event case there is no shift register or feedback node. You register for something, but then on the next iteration of the loop the event structure gets that null refnum again.
  19. So you weren't implying this is a bug that everyone relies on, by quoting someone saying: It appeared to break the rules of data flow. I'd consider that a bug worth fixing, and a feature that shouldn't be relied on.
  20. I agree with the others. This should never have worked, and was clearly a bug that got fixed. In every iteration of the while loop you are passing in a reference to a refnum equal to 0. All design patterns I've seen from others have this as a shift register, usually uninitialized, which gets its value set at the first chance it can. I tested this in 2015 SP1 and 2016, and both still increment. I only have 2017 SP1, so I'm not sure if this was an SP1 fix or a 2017 fix. Still, your suggestion that everyone relies on this (based on your quote of Linus) might be a bit exaggerated.
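For anyone following along without LabVIEW in front of them, here is the tunnel-versus-shift-register distinction from the last few posts, sketched as a rough Python analogy (the names are made up; this shows the dataflow idea, not real LabVIEW semantics):

```python
ref = None  # the null refnum wired in from outside the loop

# Tunnel-like: every iteration re-reads the same outside value,
# so whatever was "registered" last iteration is lost.
for _ in range(3):
    current = ref               # always None
    current = "registered"      # discarded when the iteration ends

# Shift-register-like: the updated value is carried into the next
# iteration, which is why the DVR pattern keeps working.
state = ref
for _ in range(3):
    if state is None:
        state = "registered"    # created once, then reused
```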
  21. This one I can answer. So the picture data type is really a string. This is a string with instructions on how the image should be rendered. So imagine the first instruction is something like "Draw a rectangle that is 50 by 50 starting at 0,0 and is solid red", and the next instruction is "Draw a rectangle that is 50 by 50 starting at 0,0 and is solid blue". Both instructions will be embedded in the string; one gets drawn, then the other on top. Obviously in this example the red rectangle can't be seen, it will be under the blue one. But the instructions will still draw all the operations, even the ones that won't be seen. Now if this is in a shift register, then this string of instructions will keep getting longer as we concatenate more instructions to the end of the string over and over again. Here is a post over on the dark side talking about it a bit. And here is an awesome post by Norm talking about how the image instructions are stored, and how they can be manipulated (as strings) to perform image translations (repositioning in the X and Y) by changing these string values. As for suggestions: in the past what I often need is a picture control that is built up of several other images. They can be combined with Concatenate Strings in the order you specify. So oftentimes I will keep the pieces of the overall image in memory, so that I can quickly recreate the end image by swapping one out. For instance, let's say I have a button, and I have an overlay for when the mouse is over it. I will draw the button, draw the overlay, and then keep them both in some private data. Then I concatenate the two when the mouse is over the picture, or just show the button (that I've already drawn) when the mouse isn't. This is what I did in my Toolbar class. Here each button is also its own image I keep track of, then I combine them all to draw the whole result. I don't re-render the whole toolbar.
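The caching idea is language-agnostic. A tiny Python analogy, treating the picture as an append-only list of draw instructions (the instruction strings here are invented, not LabVIEW's actual picture opcodes):

```python
# Cached pieces, each drawn exactly once and kept in "private data".
button  = ["rect 0,0 50x50 solid red"]
overlay = ["rect 0,0 50x50 solid blue"]

def render(mouse_over: bool) -> list[str]:
    # Combining pieces is just concatenation, like Concatenate Strings on
    # picture wires; nothing is re-drawn, only re-assembled.
    return button + overlay if mouse_over else button

print(render(True))   # button with the hover overlay on top
print(render(False))  # button alone
```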
  22. Yeah, don't use them with subpanels. Either go all out with the Windows API, abandoning subpanels, or use subpanels exclusively. I threw together a quick example of the Windows-only route here, showing a somewhat unlimited way of spawning windows. As for a subpanel-only route I have the Image Grid, which is subpanels of subpanels.
  23. Yeah, if this is using a Windows API with parent/child relationships you will run into issues. When using a subpanel there is no HWnd for the inserted VI, only for the VI that contains the subpanel. More information is needed.
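One quick way to see this for yourself is to enumerate the child windows of the hosting VI. A sketch using the pywin32 package, with a placeholder window title:

```python
import win32gui  # pywin32

def collect(hwnd, classes):
    classes.append(win32gui.GetClassName(hwnd))
    return True  # keep enumerating

hwnd = win32gui.FindWindow(None, "My Host VI")   # placeholder window title
classes = []
win32gui.EnumChildWindows(hwnd, collect, classes)
# The inserted VI's panel never shows up as its own HWnd; only the hosting
# VI's window and its child widgets do.
print(classes)
```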
  24. This doesn't belong in this thread, but that being said, why do you need to run the VI every time? Why doesn't it configure and save the palette in a way that the settings are retained?
  25. I never liked an actor design that didn't allow for an actor to run without the rest of the application. It makes it so much easier to develop and troubleshoot an actor when you can run it as non-reentrant and have it work without needing the rest of the framework. I'm sorry that doesn't answer your situation, but in my actor design an actor can just be run in parallel with the normal code, and it will publish its data and subscribe to user events (including quit). It won't have any application-wide config information, but I just have it default to something if the rest of the application can't be found. In this case, actors written for the large application can be copied to a new project and called like any normal subVI. Modularity and reusability.
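That standalone-first shape translates outside of G too. A loose Python sketch of the same idea, with invented names, just to show the structure:

```python
import queue
import threading

class Actor:
    def __init__(self, app_config=None):
        # Default the config when the rest of the application isn't around,
        # so the actor still runs on its own.
        self.config = app_config or {"publish_rate_hz": 10}
        self.inbox = queue.Queue()   # stand-in for user-event subscriptions

    def run(self):
        while True:
            msg = self.inbox.get()
            if msg == "quit":        # the application-wide quit event
                break
            print("handling", msg, "with config", self.config)

# Standalone use: no framework, just run it like any normal subVI.
actor = Actor()
worker = threading.Thread(target=actor.run)
worker.start()
actor.inbox.put("do work")
actor.inbox.put("quit")
worker.join()
```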