JoeQ

Members
  • Posts: 70
  • Joined
  • Last visited
  • Days Won: 1

Everything posted by JoeQ

  1. How is the tablet holding up? I tried running LV on one. It can read the tablet's GPS sensor, but everything else comes from the USB. https://www.youtube.com/watch?v=6PpAJeAurr4
  2. Making phase noise measurements the old way using Labview 6. https://www.youtube.com/watch?v=GP6B_ImnoII
  3. Article that mentions the Labview toolkit. http://evaluationengineering.com/articles/201208/the-uncertainty-about-jitter.php
  4. LeCroy JTA2 vs Labview..... https://www.youtube.com/watch?v=Bg45FuoeHZk
  5. Video showing a 1970s VNA being run with Labview. This equipment has no CPU and everything had to be calculated by hand. Having the PC crunch the numbers makes this old antique easy to use. Labview starts about halfway into it....
  6. Actually I agree with you. There is no way I could do in 6.1i what I can in 2014. I typically only use Labview to automate some sort of test, maybe collect and display some data. In a few rare cases I have used it to replace writing a full-on app. It's been a great tool for engineering work; I can get a lot done in a short time. I do not typically reuse my code. Most of it is fairly simple, and the tests are unique enough that there is little to gain. Any features they add that attempt to "help me" draw go to waste on me. I know where I want things and how I want them wired. If they want to help me in this area, make it fast and stable. I would take 10 bug fixes of my choice over an autogrow feature! No one will say Labview has not improved over the years, but some areas are really lackluster for the time it has been on the market. You mention the UI. I was amazed when I found out that the simple edit commands that Windows supports were not supported in the tables, list boxes, or any other function I looked at. For me, I just use the basics when it comes to the UI. I'm sure you have seen some of my panels; 3D graphs are about as fancy as I get. Any time I have tried to get too fancy, all I do is waste a lot of time. If there were one UI feature I could add, it would be to solve the font problem once and for all! I don't care how. Make your own custom Labview fonts for all I care. Just make it a standard. As impressed as I am with how advanced the majority of the libraries have become, I am still amazed at just how limited the Ethernet support is and how much I have to make direct calls to Winsock. If I were stuck with 2011 I would have been fine with it, except that free jitter tool kit is just so slick.
  7. I must not be the average Labview user. I would guess there are fewer than 10 features I use that were added from 6.1i to 2014! Believe me, I cringe every time we decide to upgrade because I know some library will be broken or they will have dropped support for some NI hardware I use. Problems can be something as simple as the serial ports no longer working (and they still don't, because we all know how hard it is to use Windows to talk to a serial port) to something like the cPCI bus no longer working (which, after MANY hours on my part narrowing it down to a single file, they were able to fix). It takes me a fair amount of time to qualify a new version when it comes out. I will say that at least moving from 2011 to 2014 did not impede my efforts other than the time to install it. That said, if you read my previous posts you may get the idea I can't see any value in using Labview. This is certainly not the case at all. It has saved me countless hours over the years. It has also caused me a few more gray hairs. Add to the list: they include the Jitter Analysis Toolkit for free (and it works)!
  8. https://www.youtube.com/watch?v=4rJcEVj8OYo&feature=youtu.be Video showing some of the features of the Jitter Analysis Toolkit.
  9. If I understand the data you show, 2011 is still working even with the 2014 installed. Based on this, I do not believe a clean PC is going to change anything. What does the profile tool show?
  10. I was guessing it would be the low-level interface, but it sounds like that is not the case. It must be something in how you handle the CAN communications on your side. If I understand your graphs, you are at 20% CPU with the CAN and 2% without. CAN is a VERY slow bus, 500k, maybe 1 meg? To see 20%, I suspect you have the code structured so you stall out waiting for CAN data.
  11. Not too surprised. Again, are 2011 and 2014 installed on the same PC for both tests?
  12. I am curious: when you ran the test, did you have just 2011 installed for the first part, then install 2014 to run the second part? Or was 2014 already installed when you ran both tests? The reason I ask is I am curious if this was something in the way they talk with your specific hardware. If they can mess up something as simple as an RS232 port, it's not impossible that they messed up your CAN interface. It will be interesting to hear what you find was the cause.
  13. I loaded the project I last worked on into 2011 and then 2014 while recording the CPU usage. Both use about 25% of the processing power. I really have not seen anything slow down or take more processing power. Wonder what you are doing that you see this sort of change.
  14. Playing with an open source C code generator to build the model for my simulator.
  15. Sub 1 Mbit then. This should be no problem for storage. If the boards are able to put out data in, say, a raw 24-bit mode, it would be better to store it in that format and post-process it to double. For your speeds, it doesn't matter. Typically I don't care about the GUI when I am collecting data. Maybe I just need some sort of sanity check. So I may take a segment of data, say 1000 samples, and do a min/max on it, then just use those two data points for the GUI. I don't like to throw out data or average, as it is just too misleading when looking at the data. Again, think about how peak detect on a DSO works (a rough C sketch of that min/max pass is at the end of this list). I may, for example, change the sample size for the min/max as the user zooms in. This min/max data path is normally separate from the rest of the data collection. If the GUI stalls for a half second it may not be a problem, but normally, missing collected data is not something I can have. For your rates, you should be able to do most anything and get away with it. The system I mentioned previously has about 500X higher sustained data rate and can't drop data. It's a little tight but workable without any special hardware. I still use that old Microsoft CPU Stress program to stress my Labview apps. Launch a few instances of it and you can get a good idea how your design is going to hold up.
  16. Before getting into too much detail, you need to provide what sort of data rate you need. Are you needing to record two 8-bit channels at 200 Hz? Or a hundred 24-bit channels at 200 Hz? There are big differences in drive write speeds. One of the faster systems I played with used two FLASH drives configured as RAID 0. Other things, like what else the PC is running, may come into play. I have had to use compression to overcome problems with write times. To give you an idea of the amount of data that can be captured, the system I am currently working on can fill a Tbyte drive in an evening. In this instance there is no compression and I'm using mechanical non-RAID storage. All the code is in Labview. Normally, I will have two separate data paths: one for storage, the other for the display. Typically I will do something similar to a peak detect on a scope for the display data. There are lots of ways to display the data, and it really depends on your requirements.
  17. https://www.youtube.com/watch?v=6PpAJeAurr4&feature=youtu.be Video showing a home made simulator I am working on.
  18. JoeQ

    CUDA

    Good points. When looking at the memory, for example, I did not see a way to define whether I wanted it to be shared, pinned, etc. I didn't see a way to move data from pinned to shared. It seems like all I can define is the type and size. Maybe it is all done for you, but it's hard to believe they could get any performance this way. No source, so I can't really say what they are doing. It would be interesting to do some benchmarks using only the libs they have made available. (A small CUDA runtime sketch of the pinned vs. device allocation I was looking for is at the end of this list.)
  19. JoeQ

    CUDA

    The 2014 installation took about three and a half hours but went smoothly. Adding the CUDA toolkit was simple enough to do as well. There were only seven days left on the evaluation, but NI allowed me to extend it to 45 days. I started out running various programs with the 2014. I did not see a whole lot of difference between it and 2011. It appears to run at about the same speed, and editing seems about as fast. Serial ports are still broken, so I am not expecting any big bug fixes. The new tan/brown icon stands out. A friend noticed there was no longer a sequence displayed in the icon and that alone was worth the upgrade. On the plus side, it did not appear that they broke anything major that would prevent me from using it. I can't always say that. I brought up the GPU examples. There are four of them. The first just reads the information from the board. They show an FFT and some heat equation solver. If you load the solver example and display the hierarchy, you get a feel for just how complex it is. Pushing into the program, they lock you out of viewing the source without having a license. IMO, the whole point of the evaluation is to see if they offer something that could be used. I would expect to be able to code something up with the trial version. Another thing I do not see in the demo (you can't develop code; it's a demo, not a trial) is some sort of benchmark. I would have expected to see some different algorithms coded in native Labview, C, maybe threaded C, and then their CUDA code. Even if they locked you out of the CUDA, at least you could get an idea of the performance gains between them. The source code to read the board's information and other simple examples are included in Nvidia's CUDA development tools. Microsoft offers an express version of Visual Studio. Both of these are free. Making calls to a DLL is no big deal with Labview, so I am still at a loss as to what this $1000 tool kit is getting you. Does it somehow help you develop code for the GPU faster? Is the code they come up with better than what you could code in C? What are they hiding with their locked VI functions?
  20. JoeQ

    CUDA

    It took a few days to get the license server set up for 2014. My plan is to evaluate all of the latest tools. LV2014 is going in now. My hope is that they have something like this, above and beyond an FFT. I would actually like to see something that would convert the Labview code to CUDA, then call the compiler for you. It would seem that writing the code for optimal performance using a GPU could be quite complex, and I am not sure how they would go about this. Just how to best partition the design could be a problem. Looking forward to seeing what that $1000 package is.
  21. JoeQ

    CUDA

    I downloaded it from home, selecting the 2013 version. However, what it sent was the 2014 version. I did scan both the downloader and the installer for viruses and it did not find anything. Are the older versions archived someplace where I can download them? I looked through their FTP site and could not find them.
  22. JoeQ

    CUDA

    Using the link you provided, I attempted to download the 32-bit version for 2012 and it fails. So I tried the 2013 version and answered a couple of questions, then was blocked by our system... I'll try it from home and let you know how it works out. Gateway Anti-Virus Alert: This request is blocked by the Firewall Gateway Anti-Virus Service. Name: MalAgent.H_1081 (Trojan)
  23. JoeQ

    CUDA

    I recently had a need to use a GPU and saw NI has a $1000 package for Labview. Looking at it, am I missing something, or is there more to the library than just a wrapper for CUDA? I ended up just doing the code in C and making the call from Labview to my DLL (a rough sketch of that kind of wrapper is at the end of this list). Still curious what you get for that $1000.
  24. That's not too bad (450mS). On my newer PC it was taking well over a second. Before dropping the property and using the color box, I removed the selected cell background color and then only updated the changed cells (basically removing as many property calls as possible). You are right, this was better, until you edited large blocks of cells. The array of color boxes was much faster, but again, I am not sure how to make it work with the properties that are available across all PCs.
  25. Looks like if you set the table to transparent, then place an array of color boxes under the table, you can get some fair performance. I could not find a property for the color box size, and if you set it manually, it appears to be off by 1 pixel in both directions compared with the table size. I was thinking that for the best performance, you may need to overlay the array of color boxes with a matrix and a table. Use the table to get the editor to sort of work for data selection, use the matrix to display the data, then the color box to add the color. A total nightmare, but it seems like this may be the best way to get a table working using the standard libs. I am not sure how you would get the whole mess to align on all PC combos.
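
Sketch referenced from post 15: a minimal C version of the peak-detect style min/max pass described there, reducing each block of raw samples to one (min, max) pair for the GUI. The function name and calling pattern are illustrative choices, not anything out of an NI library; the storage path would keep the full raw stream separately.

    /* Peak-detect style decimation for display: reduce each block of raw
       samples to a (min, max) pair so short spikes still show on screen. */
    #include <stddef.h>

    void minmax_decimate(const double *raw, size_t n_raw, size_t block,
                         double *disp_min, double *disp_max, size_t *n_disp)
    {
        size_t out = 0;
        for (size_t i = 0; i < n_raw; i += block) {
            size_t end = (i + block < n_raw) ? i + block : n_raw;
            double lo = raw[i], hi = raw[i];
            for (size_t j = i + 1; j < end; j++) {
                if (raw[j] < lo) lo = raw[j];
                if (raw[j] > hi) hi = raw[j];
            }
            disp_min[out] = lo;   /* lower envelope point for the GUI */
            disp_max[out] = hi;   /* upper envelope point for the GUI */
            out++;
        }
        *n_disp = out;
    }

The GUI only ever sees the two envelope arrays, so nothing is hidden by averaging, and the block size can change as the user zooms without touching the stored data.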
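Sketch referenced from post 18: what choosing the host memory type looks like with the plain CUDA runtime API, which is the control the toolkit did not appear to expose. Note that shared memory is declared inside a kernel with __shared__ rather than allocated from the host, so only the pinned host buffer and the device allocation are shown here; error handling is trimmed to keep it short.

    #include <cuda_runtime.h>

    int transfer_demo(size_t n)
    {
        float *pinned, *dev;
        cudaMallocHost((void **)&pinned, n * sizeof(float)); /* page-locked (pinned) host buffer */
        cudaMalloc((void **)&dev, n * sizeof(float));        /* device (global) memory           */

        cudaStream_t s;
        cudaStreamCreate(&s);
        /* An async copy only overlaps with other work when the host side is pinned. */
        cudaMemcpyAsync(dev, pinned, n * sizeof(float), cudaMemcpyHostToDevice, s);
        cudaStreamSynchronize(s);

        cudaStreamDestroy(s);
        cudaFree(dev);
        cudaFreeHost(pinned);
        return 0;
    }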
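Sketch referenced from post 23: the kind of thin wrapper meant by "do the code in C and call the DLL from Labview". It exports one plain C function that runs a forward complex FFT on the GPU with cuFFT, which Labview can then call through a Call Library Function Node. The function name and in-place data layout are illustrative choices; the cuFFT and CUDA runtime calls are the standard ones, with most error checking trimmed.

    #include <cuda_runtime.h>
    #include <cufft.h>

    #ifdef _WIN32
    #define DLL_EXPORT __declspec(dllexport)
    #else
    #define DLL_EXPORT
    #endif

    /* data: interleaved re/im pairs, n complex points, transformed in place. */
    DLL_EXPORT int gpu_fft_c2c(float *data, int n)
    {
        cufftComplex *dev;
        if (cudaMalloc((void **)&dev, sizeof(cufftComplex) * n) != cudaSuccess)
            return -1;
        cudaMemcpy(dev, data, sizeof(cufftComplex) * n, cudaMemcpyHostToDevice);

        cufftHandle plan;
        cufftPlan1d(&plan, n, CUFFT_C2C, 1);           /* single 1-D complex-to-complex plan */
        cufftExecC2C(plan, dev, dev, CUFFT_FORWARD);   /* in-place forward transform         */

        cudaMemcpy(data, dev, sizeof(cufftComplex) * n, cudaMemcpyDeviceToHost);
        cufftDestroy(plan);
        cudaFree(dev);
        return 0;
    }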