Everything posted by hooovahh

  1. For the data to be usable you have to know what it is at some point. Sure, you can say you have a variant, but if I'm going to take an average of a variant I need to know how to interpret the data. Should this be a string that is then transformed into a numeric? Or is it a boolean that should be treated as a bit? Or is this an enum which corresponds to some value? If you don't know what the data type is just yet, but you know that in two places the data type is the same, I highly recommend a type def. Using the OpenG Write Key (Variant) and Read Key (Variant) under the Variant Configuration palette I can write and read anything to a file. Quite nice, but at some point I'm going to need to turn that variant into a usable data type. Variants are great for a transport layer where we don't really care what the data is; we just send it, and the place that receives it needs to convert the variant back into something usable. The Write and Read Key can be thought of as the transport layer: we send some data as a variant to a file, then we receive it back when we read it. For it to work properly, the write and the read should ideally have the same data type. If these are both linked to the same type def, then updating that one type def will update both the write and the read to use the new data type. We may not know the data type at design time, but we know that whatever we write, we are also going to want to read, and the data had better be the same. I've seen variants used in Queues, User Events, TCP/IP, File I/O, FIFOs, and several other transport layers. These all work great because they don't care what they are given; they just send the bytes, and whoever gets the bytes is responsible for getting usable data out of them. If you aren't dealing with a transport layer, and instead are just passing data around a VI but don't know what it is yet, there isn't really a need to use a variant; just stick with the type def and update it when needed.
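To make the transport-layer idea concrete, here is a rough text-based analogy in Python (not LabVIEW); the file format, function names, and the "gain" key are all hypothetical. The transport functions never interpret the data, and one shared "type def" (here, a cast function) keeps the writer and the reader in agreement, so changing it in one place updates both ends:

```python
import json, os, tempfile

# The shared "type def": change it here and both write and read sides update.
AS_SETTING = float

def write_key(path, key, value):
    # Transport layer: stores whatever it is given without interpreting it.
    with open(path, "w") as f:
        json.dump({key: value}, f)

def read_key(path, key):
    # The reader casts with the same shared type def the writer used.
    with open(path) as f:
        return AS_SETTING(json.load(f)[key])

path = os.path.join(tempfile.gettempdir(), "settings.json")
write_key(path, "gain", 2.5)
print(read_key(path, "gain"))  # 2.5
```

If the reader used a different cast than the writer expected, the round trip would silently corrupt or fail, which is exactly why linking both ends to one type def matters.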
  2. Often I find myself distracted by the thought that the work I am doing would be faster if I just did it manually instead of spending time scripting it. There have been a few times where I spent a long time working on some scripting tools that I figured would never be used, or only be used once or twice. I was quite delighted to find that not only had I been using these tools, but other developers had been too. Now these tools are part of my company's application building process and are saving tons of time. They aren't all like this, of course, but I know now that my intuition about whether a scripting function is worth writing can be incorrect at first. Only time will tell if it is worth it.
  3. This isn't a working solution but it might help. I would use 7-Zip if the other native zip utilities don't like .GZ. A .GZ is not a zip, but many zipping tools support it. http://lavag.org/topic/16513-can-we-prevent-zlib-compress-dir-from-replacing-accented-characters/#entry101116 In that thread someone asked about zipping a folder of files in a way that wasn't supported, so I suggested using 7-Zip from the command line to get the function needed. You can look at the code and modify it to extract the .GZ to a temp location, and then you can read the files from there.
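A minimal sketch of the 7-Zip command-line approach, in Python rather than LabVIEW. It assumes 7z (7z.exe on Windows) is on the system PATH; the flags are standard 7-Zip usage: "x" extracts with paths, -o sets the output directory (no space after it), and -y answers any prompts with yes. The file and folder names are made up:

```python
import subprocess

def build_7z_extract_cmd(archive_path, dest_dir):
    # Builds the command line; run it with subprocess once 7-Zip is installed.
    return ["7z", "x", archive_path, "-o" + dest_dir, "-y"]

cmd = build_7z_extract_cmd("data.gz", "C:/Temp/extracted")
# subprocess.run(cmd, check=True)  # uncomment to actually extract
print(" ".join(cmd))
```

In LabVIEW the equivalent is passing that same command string to System Exec.vi, which is what the linked thread does.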
  4. Post some code. I can't be sure of the data type you are working with, or what you have tried, based on your description, but I think I know a way to go from the LabVIEW image data type to a system stream image in .NET.
  5. I don't know why that idea doesn't have more kudos. Are people not using more Quick Drop items than what comes with LabVIEW? As soon as you start installing some of the ones posted on NI's site, or making your own, you realize quickly that QD only works well if you have very few functions. Once you have more than, say, 5 you start to run out of shortcuts, and only having one modifier key (the Shift key) is also quite limiting. If QD is going to grow it needs to be overhauled. Not necessarily in the way TST suggested, but in some way.
  6. Do a repair install of LabVIEW (or DSC). Go to Add/Remove Programs, choose NI and modify it. A menu will come up with all the NI software installed. Choose LabVIEW and click Repair. This will reinstall all the files needed to run LabVIEW. Note that you will need the LabVIEW CD/DVD as a source.
  7. Curious what issues you've seen. I'd say we inline 80% or so of our reuse library and haven't seen issues in 2011, 2012, or 2013 as a result. What issues should we look for?
  8. Let me save you the time. At the moment there is no easy way to do what you want. Well, there is, and it is keeping track of it the way you suggested. I've seen a test VI somewhere that took a screenshot of the block diagram, then did vision processing to determine if the run button showed running or reserved to run. Those are the kinds of methods people have tried in an attempt to do what you want programmatically. Sorry.
  9. The great leader demands video game entertainment, perhaps.
  10. This is not true. You can have a variant that has data in it, acting as your "wrapper", and then that variant can have attributes, acting as your "hash table". You can then read the attributes, and you can convert the variant back into the data type it was before. The two uses aren't completely unrelated. Say I have a waveform. That can be XY data, or X0, delta X, Y, so maybe I choose to put it into a variant so it can be either. Then if I want to keep information like scale or plot names, I could store those as attributes so my single wire has all that information. Sure, in this case maybe a cluster would be a better option, but it is one way a variant can hold everything you need.
  11. The first issue is in state 2 (you should probably use a type def enum): you aren't passing through the bottom dynamic data type wire. This means that the sine wave data is lost and isn't available for the comparison in state 3. If you probe the wire in state 3 you'll see the top wire has data and the bottom does not. The Express VI doesn't seem to be doing what you want either. Instead I like to use the raw numbers and do the comparison myself. This way I can probe along the way and see what is going on. I take the Y, subtract a range, then perform an In Range check on the other Y values. Then I check that all are in range using the And Array Elements function. Test Code Hooovahh Edit.vi
  12. Attached is a quick example. Basically you set the values as a lookup table where a string corresponds to a value. Then you can get that data back by providing the string and performing a read. This is apparently a very efficient method of getting data. You'll need to convert the variant read back into something useful, and for that you need to know the data type that it was written in. Here you may run into errors. In my example, if I tried to read all attributes and said that the data type was string, one would read fine but the other two reads would generate an error, because the data type isn't the same as it was written with. EDIT: just saw you use 2010, so here is the same VI in 2010. Variant Attribute Example.vi Variant Attribute Example 2010.vi
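For readers without the attachment, here is a text-based analogy of variant attributes in Python (not LabVIEW); the attribute names and values are invented for illustration. It mirrors the behavior described above: reads with the correct type succeed, and reads that claim the wrong type error out, much like Variant To Data does:

```python
# The attribute table: each name maps to a value written with some type.
attrs = {}
attrs["operator"] = "hooovahh"   # written as a string
attrs["count"] = 42              # written as an integer
attrs["limit"] = 1.5             # written as a float

def read_attr(name, expected_type):
    # Analogous to reading an attribute then converting with Variant To Data:
    # a type mismatch produces an error instead of garbage data.
    value = attrs[name]
    if not isinstance(value, expected_type):
        raise TypeError(f"{name} was not written as {expected_type.__name__}")
    return value

print(read_attr("operator", str))   # reads fine
# read_attr("count", str)           # would raise: written as int, read as str
```

Reading all three attributes as strings fails on two of them for exactly the reason in the example: the declared read type must match the written type.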
  13. Explain to me how this is a matter of life and death? You are not going to die if no one helps you. And if you are then you should contact the authorities and not a LabVIEW forum on the internet. Please do not use sensationalist titles in the future to grab the attention of the users. Spammers use the same technique and they are handled by being banned. LabVIEW ships with several games. Search for Moon or Moonlanding in the example finder for one.
  14. Probe the path wire going into the New Report.vi when the VI is run. You'll see that the path provided as an XLS template points to a location that doesn't exist. Your XLS file is in the same directory as your Main.vi, but you stripped the path twice, meaning the path points to the folder above where the Main is. Remove one of the Strip Path calls to fix that; then, if you have the Report Generation Toolkit installed, it will open Excel and make your spreadsheet. It doesn't get saved because you didn't use Save Report to File. Call this before the Dispose Report. EDIT: Cross post http://forums.ni.com/t5/LabVIEW/Error-7-with-Report-Generation-Toolkit/td-p/2695725
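The double-strip bug is easy to see with path arithmetic. A quick Python illustration (the project path is made up; in LabVIEW this is two chained Strip Path calls):

```python
from pathlib import PureWindowsPath

# Hypothetical location of the main VI.
main_vi = PureWindowsPath("C:/Project/Main.vi")

stripped_once = main_vi.parent           # C:\Project  -> the VI's own folder
stripped_twice = main_vi.parent.parent   # C:\         -> one level too high

print(stripped_once / "template.xls")    # where the XLS actually sits
print(stripped_twice / "template.xls")   # where the broken code looks
```

Each Strip Path removes one component from the end, so stripping twice from a VI path lands in the folder above the one containing the VI.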
  15. Cross post: http://forums.ni.com/t5/LabVIEW/Embed-Screen-of-a-remote-machine/m-p/2693305/highlight/true#M800368 As suggested there, download the ActiveX components and look into Microsoft's documentation. http://msdn.microsoft.com/en-us/library/aa383541(v=vs.85).aspx
  16. I fear he is trying to detect "Planes" by providing an image of a plane then seeing if the program can detect a plane in another picture.
  17. What have you tried? It sounds like some LabVIEW tutorials would go a long way. Here are some of the typical education links I share; most are free. 3 Hour Introduction http://www.ni.com/white-paper/5243/en/ 6 Hour Introduction http://www.ni.com/white-paper/5241/en/ LabVIEW Basics http://www.ni.com/gettingstarted/labviewbasics/ Self-Paced Training for Students http://www.ni.com/academic/students/learn/ Self-Paced Training, Beginner to Advanced, SSP Required http://sine.ni.com/myni/self-paced-training/app/main.xhtml LabVIEW Wiki on Training http://labviewwiki.org/LabVIEW_tutorial#External_Links
  18. I didn't realize this conversation had its scope limited to a button, and more so limited to the decal. This changes things slightly. Controls do not have multiple decals and a selection for an index (but that sounds like an idea for the Idea Exchange, maybe). One option is to use a Picture Ring. This contains multiple images and you can choose which to display. Instead of having a value change be triggered when the user clicks, you could capture a mouse down and change the Picture Ring to show the new image. A little clunky, but it could work. There is a way to programmatically replace the decal of a button without the development environment. Here is a discussion on it. https://decibel.ni.com/content/thread/4901?tstart=30 This replaces the PNG images within the control and is not something NI would condone, because it is messing with a file whose structure is not documented and could change at any time. This works well enough, but again the .ctl file can't be inside the EXE. So you would need a way for the control to be loaded in the EXE from disk and not from reference, or force the reference to be loaded from outside the EXE first. But then you will get warnings from the EXE saying the control was loaded from a different location.
  19. 1) I've never known NI to publish how the offset will drift with temperature, only that it will remain within the specified tolerances if it is within the temperature range that the card is rated for. 2) It appears this card also supports some kind of daily calibration; I am not too familiar with it but there is a document on it. 3) Resolution can be found in the specifications for the card. It really depends on the selected range. It is 24-bit resolution, but the range can vary between +/-42.4V and +/-316mV. This gives a resolution between roughly 0.005mV and 0.00004mV. All this information can be found in the two documents. http://www.ni.com/pdf/manuals/373770j.pdf http://www.ni.com/pdf/manuals/371234b.pdf Which were both found at the product page under Resources. http://sine.ni.com/nips/cds/view/p/lang/en/nid/202236
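The resolution figures above come from a simple ideal-code-width calculation: the full-scale span divided by the 2^24 codes of a 24-bit converter. A quick Python check (this is the ideal figure only; real-world accuracy is worse, per the specifications):

```python
def code_width_mV(range_min_V, range_max_V, bits=24):
    # Ideal resolution: full-scale span divided by the number of ADC codes.
    return (range_max_V - range_min_V) / 2**bits * 1000  # result in mV

print(code_width_mV(-42.4, 42.4))     # ~0.005 mV on the widest range
print(code_width_mV(-0.316, 0.316))   # ~0.00004 mV on the narrowest range
```

The same one-line formula works for any range/bit-depth combination, which is handy when comparing cards.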
  20. For this to work you will need a full LabVIEW development environment. The LabVIEW Run-Time Engine does not have the features needed to edit or resave a VI, and that is what would be needed. This means this feature cannot work in an EXE. So if you have a full LabVIEW install you could do this using LabVIEW scripting. Go to Tools >> Options, then the VI Server page, and check Show VI Scripting functions. This will add scripting tools to your palette. With scripting you can programmatically open a VI, replace the control with the one you want, resave it, then run it. I believe packed libraries are made to act as a step between source code and an EXE, where you have the VIs but they don't have block diagrams or front panels. I know there are options to remove these or leave them, but I don't know the default. If your VI has no block diagram then you can't edit and resave it. So for this to work you will need the LabVIEW development environment, scripting, and VIs that can be edited.
  21. Adding to the SCC discussion, which seems to be skewing a bit from the original topic: Viewpoint Systems makes an SVN toolkit which is in VIPM. It allows you to rename a VI and have it be reflected as a LabVIEW rename and an SVN rename in one step. This works mostly well, but I had some issues with renaming a VI that was read-only. This also only works well if all callers of that VI are already loaded into memory. Also, the trick about moving multiple files on disk is one I was not familiar with.
  22. I've used PDFCreator in the past for generating PDFs using the print to PDF feature. I remember one particular version had a .NET API so that you could have a little more control. This also allowed it to work without having to change what the default printer on the system was.
  23. You didn't post your VIs, just pictures of them, but from what I see I think things should work correctly. You have your "Select The Type of Test.vi" that has a UI, and when you click "New Variant" it prompts the user for input, then calls the config VI. With the config VI's settings the way you showed in the first post, with Show When Called and Close When Done, the config VI should pop up and show its UI, then when it is done running it should close and return you to the "Select The Type of Test.vi" UI. Is this how you expect it to work? Does it not work this way?
  24. The 6008 will gather from 8 single-ended analog inputs, just not at the same picosecond. I suggest you read this: http://digital.ni.com/public.nsf/allkb/4F9D107D8B26233B86256F250057C9B3 and this: http://www.ni.com/white-paper/4105/en/ In almost all situations I'd say the muxing of the ADC is acceptable. It only doesn't work when you need to correlate two measurements of something that can change very quickly, like measuring two sine waves and wanting to know how they shift in time.
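To put a number on the mux effect: adjacent channels on a multiplexed ADC are sampled one aggregate-clock period apart, which shows up as an apparent phase shift between channels. A back-of-envelope Python sketch with illustrative numbers (not taken from the 6008's specifications):

```python
# Two channels sharing one muxed ADC at a hypothetical aggregate rate.
channels = 2
rate_per_channel = 5000                      # S/s per channel
skew_s = 1 / (channels * rate_per_channel)   # 100 us between channel samples

f_hz = 60.0                                  # sine wave being measured
phase_error_deg = 360 * f_hz * skew_s        # apparent inter-channel shift
print(phase_error_deg)                       # ~2.2 degrees at 60 Hz
```

A couple of degrees is irrelevant for most measurements, which is why muxing is usually fine; it only bites when the phase relationship between channels is the quantity you care about. Simultaneous-sampling hardware removes the skew entirely.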
  25. Looking at your other post I'm not sure why it isn't working, and I hope you can solve it. The only thing I wanted to add is that in the past, when I needed to measure 3-phase high voltage, I used the cDAQ platform because it was cheaper. But don't let me change your plans; your hardware should work just fine. I used a cDAQ-9174 4-slot USB chassis, a 9225 to measure 3-phase differential voltage simultaneously, and a 9227 to measure 3-phase differential current simultaneously. This meant the measurement could be done in Windows, which could then calculate things like power factor and phase offset, and the voltage and current wouldn't need to be scaled down as long as the voltage was 300VRMS or less and the current was 5ARMS or less. Here is the white paper on it. NI did have an "Electrical Power Measurement" package, which was a set of free VIs that took voltage and current and returned things like phase and power, but they have rolled it into the "Electrical Power Suite", which is no longer free but adds a bunch of functionality.