
Leaderboard

Popular Content

Showing content with the highest reputation on 09/18/2015 in all areas

  1. For a while now I've been mulling over a gap in what I see as software in general. This has nothing to do with LabVIEW, per se, but it is the reason we need CLAs and systems engineers to translate what the customer wants into what the customer gets. A good example of this is the CLA exam. There, we have a well-written, detailed requirements specification, and a human has to translate that into some "stuff" that another engineer will then code. So why do we need an engineer to translate what amounts to pseudo code into LabVIEW code?
Maybe 10-15 years ago (before scripting was a twinkle in the milkman's eye), I had a tool that would scan Word documents and output a text file with function names, parameters and comments, and this would be the basis for the detailed design specification. I would gather requirements from the customer through meetings and conversations and generate a requirements specification that they could sign off in Microsoft Word. Unbeknownst to the customer, it had some rather precise formatting and terminology. It required prose such as "boolean control" and "Enumerated Indicator". It also had bold and italic items with specific meanings: bold was a control/indicator name; italic was a function or state. It was basically pseudo code with compiler directives hidden in the text.
Roll forward a few years and people were fastidious about getting CLD and CLA status. Not being one of those, I looked at the CLD exam and saw that a big proportion of the scoring was non-functional. By that I mean making sure hints and descriptions are filled in etc. - you know, the stuff we don't actually do in real life. So I wrote a script that read the exam paper (after exporting to text), pulled out all the descriptions and filled in all the hints, labels and descriptions. It would take maybe 5-10 minutes to recreate in an exam but ensure 100% of the score for that part of the test (this later became Passa Mak, by the way).
So that got me thinking, once again, about the CLA exam and the gap in technology between specification and code. I have a script that takes a text file and modifies some properties and methods. It's not a great leap from modifying the "stuff" to actually scripting it. I don't have the Word code anymore, but I should be able to recreate it, and instead of spitting out function stubs, I could actually script the code. We could compile a requirements specification! If not to a fully working program, at least to the point where an engineer could code the details. Doesn't that describe the CLA exam? So I looked at an example CLA exam. Woohoo. Precise formatting already... to be continued.
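As a rough illustration of the idea, here is a minimal sketch of that kind of "requirements compiler" front end. The conventions are assumptions, not the original tool's: the Word document is taken to be exported to plain text with bold rendered as **Name** (control/indicator names), italics as *state* (functions/states), and type phrases like "boolean control" appearing verbatim.

```python
import re

# Hypothetical type vocabulary the spec prose is assumed to use verbatim.
TYPE_PHRASES = ["boolean control", "boolean indicator",
                "enumerated control", "enumerated indicator",
                "numeric control", "numeric indicator"]

def scan_requirements(text):
    """Return (controls, states) extracted from requirements prose.

    controls is a list of (name, type-phrase) pairs taken from **bold**
    runs; states is a list of *italic* runs (functions/states).
    """
    controls, states = [], []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        names = re.findall(r"\*\*(.+?)\*\*", sentence)           # bold runs
        funcs = re.findall(r"(?<!\*)\*([^*]+?)\*(?!\*)", sentence)  # italic runs
        typ = next((t for t in TYPE_PHRASES if t in sentence.lower()), None)
        for name in names:
            controls.append((name, typ))
        states.extend(funcs)
    return controls, states

# Invented example sentence in the assumed marked-up style:
spec = ("The **Start** boolean control shall move the application to the "
        "*Acquire* state. The **Mode** Enumerated Indicator shows the state.")
controls, states = scan_requirements(spec)
```

From output like this, a scripting back end could create the named controls and a state for each italic run instead of merely listing them.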
    3 points
  2. Hi Thomas, considering where you are right now, I would recommend implementing automated testing starting at the system level, because creating unit tests for a significant subset of your 5000 VIs will take a lot of time. After creating the system-level tests, I would then look into adding more test cases for each individual part of your application. For the implementation, we have used UTF in the past, but we usually create our own VIs for system-level tests, as the setup can be quite complex and we do not feel that UTF offers us much benefit at that level. What we do is add a script run by the OS (in your case you would have a slightly different script running on three PCs, or more likely 3 VMs <- VMs allow you to share HW resources more easily...) every night that:
1 - Updates the SVN folders that are part of the test.
2 - Calls a VI in executable form (which you could compile independently on each platform) that performs the system-level test while monitoring RAM & CPU usage in the background.
3 - Calls another script/application to parse all the result files and create a separate report, including graphs of the logged data overlaid on top of known-good results. We do this for the values that are harder to analyze automatically but that a human eye can check at a glance the following morning.
Since our error handling includes an FGV holding all the errors/warnings generated, we can easily include those in the reports along with all the other results and make them part of our P/F criteria. One thing to keep in mind is that the task often looks daunting, but you have to start somewhere. I find it a lot easier to start with a limited scope that grows over time. With this incremental process, creating your first set of tests is more manageable, and you can add more test cases as problems are discovered and you want to make sure future releases cover them.
When implementing automated testing after the fact, we usually begin by creating unit tests for the most critical sections that can break, and as we make modifications to existing code. All new code changes should include proper unit tests for the given module. Hope this helps.
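The reporting half of step 3 above might be sketched as follows. The result-file format here (one "name,status,errors,warnings" line per test) is invented for illustration; the real setup parses UTF/custom result files and overlays graphs, but the point is the same: fold the logged errors/warnings into the pass/fail criteria alongside the test status.

```python
import csv
import io

def summarize(results_csv, max_warnings=0):
    """Turn a nightly results file into (report_lines, overall_pass).

    A test passes only if its own status is PASS *and* the error/warning
    log stayed within limits -- this is how logged errors become part of
    the P/F criteria.
    """
    lines, overall = [], True
    for row in csv.DictReader(io.StringIO(results_csv)):
        errs = int(row["errors"])
        warns = int(row["warnings"])
        passed = row["status"] == "PASS" and errs == 0 and warns <= max_warnings
        overall &= passed
        lines.append(f"{row['name']}: {'PASS' if passed else 'FAIL'} "
                     f"({errs} errors, {warns} warnings)")
    return lines, overall

# Invented example run: one clean test, one that "passed" but logged errors.
nightly = "name,status,errors,warnings\nramp_test,PASS,0,0\nsoak_test,PASS,1,2\n"
report, ok = summarize(nightly)
```

A nightly OS scheduler entry (Task Scheduler or cron on the VMs) would run the SVN update, launch the test executable, then feed its result files through something like this and mail or archive the report.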
    1 point
  3. The problem is the customers. You must know that a requirements document, or rather a text document from a customer that very loosely describes what they want the software to do, is going to vary in format, wording, and technical level from customer to customer. I've seen plenty of documents that were supposed to describe the software a customer wanted, but were more of a stream of consciousness, including but not limited to things like "the operator won't get bored using the software". Good luck getting them to use a word like Boolean, or Enum. This is hardly pseudo code, and it requires some amount of magic and hand-waving when it comes to bidding on a project with this type of specification. For me, the real meat of what needs to happen is a sit-down conversation with the end user, asking what they want it to do. Flesh out what they really need, and what they want. Understand priorities, and try to think of all the pitfalls, technical limitations, and deadlocks where they ask for something in one place and contradict it elsewhere. I'm not saying it can't be improved; I'm saying this is why, in my world, so much effort is put into translating this document into an output like software. That being said, I agree that you can fail the CLD/CLA simply by not understanding what it wants. I remember hearing of someone taking the CLD coffee maker exam having never drunk coffee, or even knowing what it was. If it were me in real life, I'd sit down with the customer and discuss what they really want and how they want it to work. I do.
    1 point
  4. NI, in its generosity, has made two toolkits I suddenly have a need for -- free. Unfortunately, they are only free in the 2015 version of LabVIEW. You still have to pay for them in old versions (I'm using LV2013). As a result, I'm pondering actually... it's hard to even type it... upgrading to the actual current version. This flies in the face of many years of (sometimes justified) paranoia about at least waiting for the SP1 version of a new LV release. However, if I wait until February (the historical SP release time), that puts me smack dab in the middle of a bunch of Big Tests, whereas if I do it now, I have a couple months of relative peace and quiet to make sure my 3500+ VIs are all still working correctly. So, those of you who have already upgraded to LV2015, how's it going? Any problems/issues/etc.? Cat
    1 point
  5. I'm doing all new development in 2015. It has been stable and trouble-free so far (since it came out). Every time I go back to a previous version now (LV2009-), I Ctrl+Alt+Drag to tidy things up and expect to see the diagram move...What, arrgh...bring me back to 2015!
    1 point
  6. The all users start menu on Windows 7 is in this folder: C:\ProgramData\Microsoft\Windows\Start Menu If you open the start menu, then right click All Programs, you get a menu for opening this folder. I've never had to do this but can an NI installer make a shortcut in this place?
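For what it's worth, a shortcut dropped under that folder's Programs subdirectory shows up for every user under All Programs. A small sketch (not NI-specific; the group name below is invented) of building that path, using the ProgramData environment variable that normally points at C:\ProgramData:

```python
import ntpath
import os

def all_users_start_menu(group="My App"):
    """Path where an installer would place an all-users Start Menu group
    on Windows 7+ (hypothetical helper, not an NI installer API)."""
    base = os.environ.get("ProgramData", r"C:\ProgramData")
    return ntpath.join(base, "Microsoft", "Windows", "Start Menu",
                       "Programs", group)

path = all_users_start_menu("LabVIEW 2015")
```

Actually creating the .lnk file itself would need the Windows Shell COM interface (e.g. via pywin32's WScript.Shell) or the installer framework's shortcut support.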
    1 point
  7. So I haven't had any new projects use 2015. I installed it, and the only real development I've done is updating reuse libraries, confirming functionality, and doing some EXE and installer builds for applications that are pure LabVIEW. Calling .NET and system DLLs is the most interesting thing these EXEs do. So it might be too early for me to say for sure, but I've had no issues with my limited use. To be honest, 2014 and 2013 SP0 have been pretty good. It probably is obvious, but NI has been focusing on stability. I guess what I'm saying is that if you had been forced to use 2014 or 2013 SP0 when it first came out, I don't think you'd have had any real issues. I knew of some toolkits going free in 2013 (maybe it was 2014), I think Report Generation and PID; which ones were added in 2015? EDIT: okay, here is a bit more information from the 2014 release notes: The LabVIEW 2014 Full and Professional Development Systems include all of the functionality of the LabVIEW PID and Fuzzy Logic Toolkit except the PID (FPGA) Express VI, which is part of the LabVIEW 2014 FPGA Module. The LabVIEW 2014 Professional Development System now includes the following toolkits:
– LabVIEW Database Connectivity Toolkit
– LabVIEW Desktop Execution Trace Toolkit
– LabVIEW Report Generation Toolkit
– LabVIEW Unit Test Framework Toolkit
– LabVIEW VI Analyzer Toolkit
The following toolkit consolidations also provide additional functionality:
• The LabVIEW 2014 Digital Filter Design Toolkit includes the LabVIEW Adaptive Filter Toolkit.
• The LabVIEW 2014 Control Design and Simulation Module and LabVIEW 2014 Advanced Signal Processing Toolkit include the LabVIEW System Identification Toolkit.
• The LabVIEW 2014 FPGA Module includes the FPGA Compile Farm Toolkit, which is now known as the FPGA Compile Farm Server, and the FPGA IP Builder.
• The LabVIEW 2014 Real-Time Module includes the Real-Time Trace Viewer.
(LabVIEW 2014 Release Notes, page 19)
    1 point
