ShaunR
Members · 4,856 posts · 293 days won

Posts posted by ShaunR

  1. Hi,

    Thank you for this useful library/API.

    You may want to change the representation of the numeric indicators in "GetDiskFreeSpace.vi" from U32 to U64: if the available free space is more than 4 GB, there will be an overflow error.

    Apart from this, I have a question: when I opened the project, the "Files View" shows that all the DLL files (dependencies) are on the D:\ drive, but there is actually nothing on the D:\ drive. Refer to the attached picture.

    Regards.

    I'll make a note and update the indicators if/when I revisit them. As I say in the description, they were written in 1998 (many years before x64) and you are getting them "warts and all".

    If you collapse the directory structure you will see that the Windows DLLs are in the root. They are automagically added to the dependency tree by the project manager.
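
    For anyone bitten by the overflow in the meantime: the underlying Win32 call already returns 64-bit byte counts, so the fix is just to keep the values in U64 end to end. A rough Python/ctypes sketch of the same call, Windows only (the drive letter is just an example):

```python
import ctypes

free_to_caller = ctypes.c_ulonglong(0)  # 64-bit, unlike a U32 indicator
total_bytes = ctypes.c_ulonglong(0)
total_free = ctypes.c_ulonglong(0)

# GetDiskFreeSpaceExW is the 64-bit-aware replacement for the old
# GetDiskFreeSpace call, whose counts were limited to 32 bits.
ok = ctypes.windll.kernel32.GetDiskFreeSpaceExW(
    "C:\\",
    ctypes.byref(free_to_caller),
    ctypes.byref(total_bytes),
    ctypes.byref(total_free),
)
if ok:
    # A U32 tops out at 4 GiB - 1 bytes; any modern disk overflows it.
    print(free_to_caller.value / 2**30, "GiB free")
```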

  2. Hi all,

    I have been developing some applications in LV 2010. I have a lot of backup files on our server. Suddenly all my VIs have been rolled back to a very old version (date) of my application. It keeps opening the same old 'template' VI again and again, no matter which file I open. This problem is driving me crazy. Is it LV or some kind of virus? Is there a solution for this?

    Please help...

    Sharon

    Probably the IT virus. Speak to your IT dept and ask them if they have done a restore recently.

  3. The For Loop will execute as many times as there are elements in the smallest auto-indexed array, regardless of what you wire to the N terminal. The N terminal is only of consequence if you have no array indexing, or if the value you wire is less than the length of the smallest indexed array. I expect one of your data arrays is only 1 element long.

    This, for example, will only execute 3 times because the shortest indexed array is 3 elements long.
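
    If it helps to see the rule in text-language terms, the auto-indexed For Loop behaves like Python's zip() (a loose analogy, not LabVIEW's actual implementation): the iteration count is the length of the shortest indexed input.

```python
a = [1, 2, 3]              # 3 elements - the shortest input
b = [10, 20, 30, 40, 50]   # 5 elements

# Runs exactly 3 times, like wiring both arrays through indexing
# tunnels; an explicit N larger than 3 would make no difference.
for x, y in zip(a, b):
    print(x, y)
```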

  4. I think Eric has made one important point: the solution you are looking for can give you better results if you CALCULATE the sun position instead of MEASURING it.

    To calculate the position you will need the GPS co-ordinates of your location (you don't need a GPS device; you can feed them in as constants) and the time.

    Also consider the following points when comparing a vision-based solution vs. a time/location-based solution:

    • Accuracy (~100% vs. 100%)
    • Cost of hardware
    • Cost of software development
    • Cost of installation and commissioning
    • Maintenance (need to clean the camera lens/enclosure glass, calibrations, etc.)
    • etc.

    There is also one more advantage of a time/location-based solution: it can even work at night. ;)

    see this: http://pvcdrom.pvedu...IGHT/SUNPOS.HTM

    Also, Google a bit and you will find all the required formulas.
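
    For reference, those formulas amount to only a few lines. A rough Python sketch using the textbook (Cooper) approximations - accuracy is on the order of a degree, so a real tracker would use a refined algorithm such as NOAA's:

```python
import math

def sun_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate sun elevation (degrees) above the horizon."""
    lat = math.radians(lat_deg)
    # Solar declination, Cooper's approximation
    decl = math.radians(23.45) * math.sin(
        math.radians(360.0 * (284 + day_of_year) / 365.0))
    # Hour angle: the sun moves 15 degrees per hour from solar noon
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    elev = math.asin(math.sin(lat) * math.sin(decl)
                     + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(elev)

print(sun_elevation(52.0, 172, 12.0))  # ~61 degrees: midsummer noon at 52 N
```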

    Naaah. 4 LDRs (2 per axis) driving the motors in proportion to the difference in intensity. It doesn't get simpler (or cheaper) than that, and it is self-calibrating for lighting levels (OK, I'll give you the night, but the device will be pointing at the ground since it's tracking the sun).

    But the OP wants to use a camera with a pencil stuck through a piece of paper. So I would suggest a concentric rake which will give you the position (quadrant) and length in 1 measurement.

  5. I disagree with the statement as it is written, but I suspect we agree on the bigger picture.

    Possibly :)

    I think the developer should write unit tests for the code he has developed. (And I know this is common in some software dev environments.) As you said, it helps verify the 'positive' requirements have been met. Well-written unit tests also help communicate the developer's intent to other programmers. The very act of writing tests puts me in a different mindset and helps me discover things I may have missed. Requiring at least a basic set of unit tests keeps the developer honest and can avoid wasting the test team's time on silly oversights.

    Perhaps it was worded ambiguously, since I did not mean to imply that the developer should never write any code to verify his software, but rather that it should not be used as the formal testing process. Most developers want to develop "bug-free" software, and it's useful for them to automate common checks. But I am proposing that this is for the developer to have confidence in his code before proffering it for formal acceptance. Formal acceptance (testing) should be instigated by a third party that designs the tests from the documentation; reliance on the developer's test harness for formal acceptance is erroneous for the previously stated reasons.

    However, that set of unit tests should not be blindly accepted as the complete set of unit tests that verifies all (unit testable) requirements have been met. When the component is complete and checked in, the test team takes ownership of the developer's unit tests and adds their own to create a complete unit test suite for that component. And of course, in a "good" software development process the developer never has the authority to approve the code for production. I'm pretty sure we agree on that.

    I think this is probably where we diverge.

    My view is that "that" set of tests is irrelevant. It is always the "customer" that designs the test (by customer I mean the next person in the deliverables chain; in your case, I think, production). The tests are derived from the documentation, and the principle is that you have two separate and independent thought processes checking the software: one thought process at the development level and, after RFA (release for acceptance), one at the acceptance level. I should point out that when I'm talking about acceptance in this context, I just mean that a module or identifiable piece of code is marked as completed and ready to proceed past the next gate.

    If the test harness that the developer produced is absorbed into the next level after the gate, then you lose the independence and the cross-check. If it didn't pass the developer's checks (whether he employs a test harness, visual inspection or whatever), it wouldn't have been proffered for acceptance; the developer knows it passes his checks.

  6. Raw USB in LabVIEW is very "trixie". There is no real defined standard, and the process for actually getting something usable is fraught with problems. If you need USB, it's much better to go for a device that supports a virtual serial interface. Raw USB in LV is (IMO) to be avoided at all costs.

    However.

    From what you are saying, I think you have read the NI tutorial (you talk about creating a driver in the wizard), and I will add the caveat that, generally, a USB driver cannot exist side-by-side with VISA (i.e. you must uninstall and completely remove the vendor's driver).

    But your problem has been discussed here before. I'm not sure that a resolution was ever found, but here is the link in the hope it provides something useful.

    That's about all I can offer, I'm afraid.
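
    To illustrate the difference in text form: with a virtual COM port the device is addressed like any other serial instrument, whereas raw USB needs the VISA driver bound to the device and knowledge of its endpoints. A pyvisa sketch (the VID/PID/serial values are made up):

```python
import pyvisa

rm = pyvisa.ResourceManager()

# Virtual-COM-port route: the device shows up as a plain serial
# port, so ordinary serial/VISA code just works.
dev = rm.open_resource("ASRL3::INSTR")  # e.g. COM3
dev.baud_rate = 9600
dev.write("*IDN?")

# USB RAW route: requires the VISA driver to be bound to the device
# (with the vendor's own driver removed) plus knowledge of its
# endpoints and protocol - hence "fraught with problems".
raw = rm.open_resource("USB0::0x1234::0x5678::SN001::RAW")
```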

  7. The GetValueByPointer XNode is just a way to deal with native pointers when you have no way to, or don't want to, modify the DLL itself to deal with LabVIEW handles directly.

    I find it far too slow. It's a shame it's an XNode; it's password-protected, and I wanted to find out how it determines the length of a string before it dereferences it (does it iterate 1 char at a time and check for null?). They don't mention how to do that (dereference a variable-length string) in the MoveBlock documentation.
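
    My guess is it has to do exactly that, since a raw pointer gives you nothing else to go on: copy a byte at a time until you hit the NUL terminator. In Python/ctypes terms (with MoveBlock playing the role of the memory read in LabVIEW):

```python
import ctypes

def read_c_string(address):
    """Dereference a NUL-terminated C string one byte at a time."""
    result = bytearray()
    offset = 0
    while True:
        byte = ctypes.cast(address + offset,
                           ctypes.POINTER(ctypes.c_ubyte)).contents.value
        if byte == 0:          # found the terminator - done
            return bytes(result)
        result.append(byte)
        offset += 1

# ctypes does the same scan internally when no length is supplied:
# ctypes.string_at(address) also reads up to the first NUL byte.
```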

  8. Lastly, you need to consider whether you will build a client-server model where multiple UIs can interact with a single engine simultaneously. That is a harder nut to crack...

    Naaaah.

    Dispatcher 1.0 does 90% of that. Dispatcher 1.2 does 100% (including to and from browsers, after a very nice little thread a couple of weeks back).

  9. Final thought: In the past I've naively viewed unit testing a bit like a magic bullet. Turns out... not so much. It's good at catching certain kinds of bugs, such as when an interface is implemented incorrectly. Ultimately it will only catch bugs you're testing for, and if you've thought to test for them chances are you wrote the code correctly in the first place. Unit testing is only one part of a good test process. User-driven scripted tests (a written list of steps for the tester to do) and exploratory testing are valuable techniques too.

    Indeed. It is more risk management than a no-bugs solution. The mere fact that you are writing more code (for the purpose of testing) means that even your test code will have bugs, so software that tests software actually introduces the risk that you will expend effort finding a solution to a non-existent bug in the main code.

    Unit testing (white-box and black-box) has its place, but it is only one of a number of methods that should be employed, each to a greater or lesser extent. We mustn't forget systems testing, which tests the interaction between modules and fitness for purpose, rather than whether an individual module actually does what it is designed to do.

    The main issue for any testing, though, is that the programmer who created the code under test "should" never be the person who tests it, or writes any code that tests it. The programmer will always design a test with an emphasis on what the module is supposed to achieve, to prove that it meets the design criteria - that's his/her remit. Therefore the testing becomes weighted towards proving the positive rather than the negative (relying on error guessing alone), whether it's a software testing solution or not. It's the negative (unanticipated) scenarios where the vast proportion of bugs lie, and to expect the programmer to reliably anticipate the exceptions when he/she is inherently focused on the operational aspects is unrealistic and (probably) the biggest mistake most companies make.
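
    To put "positive vs. negative" in concrete terms, here is a toy example (a hypothetical divide function; any unit-test framework would do):

```python
import unittest

def divide(a, b):
    return a / b

class TestDivide(unittest.TestCase):
    # "Positive" test: proves the module meets its design criteria.
    def test_normal_division(self):
        self.assertEqual(divide(10, 2), 5)

    # "Negative" test: the unanticipated input the developer rarely
    # thinks to probe - where the vast proportion of bugs lie.
    def test_divide_by_zero(self):
        with self.assertRaises(ZeroDivisionError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()
```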

  10. This is NOT A SCAM. We need to do something before an "Eruption of the Yellowstone Supervolcano Destroys the United States as We Know It". Is ALGOR available?

    Wouldn't bother me.

    Although it's a bit worse than that, because it's an extinction-level event. In that case it just means I'll die a few months after those in the US, and those in Australia a little after that.

    But if things work out right it might counter global cooling, when they decide that's the next money-maker and companies are paid to pump CO2 into the atmosphere.

  11. And here's the events version (stealing JCarmody's boolean logic).

    The advantage of JCarmody's is that it works anywhere on the screen, whereas the events version only works on the FP of the VI that has the code. The events version's only advantage is that it is a little more efficient in terms of CPU.

  12. The really old way (before events, queues etc) might be easier to visualise.

    It used 2 global variables (data pools). The UI would write to one of the globals to configure and control the acquisition, and all the acquisition stuff would write to the other to update the data in the UI. (Completely asynchronous, non-blocking and damned fast - not to mention a built-in system-wide probe, lol.) So the UI was completely decoupled from the acquisition, spending most of its time just polling the UI global to update the screen.

    But basically all it means is removing execution dependency between the UI and other parts of the code, usually via an intermediary interface. The inverse, I would imagine, would be something like a sequence structure with the acquisition in the first frame and the indicators in the last frame.
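
    A toy sketch of that two-data-pool pattern in text form (Python threads standing in for parallel loops; a real application would guard the shared state with locks or queues):

```python
import threading, time, random

control_pool = {"run": True, "rate_hz": 10}  # written by the UI
data_pool = {"latest": 0.0}                  # written by the acquisition

def acquisition():
    # Acquisition loop: polls the control pool, publishes to the data pool
    while control_pool["run"]:
        data_pool["latest"] = random.random()  # stand-in for a measurement
        time.sleep(1.0 / control_pool["rate_hz"])

threading.Thread(target=acquisition, daemon=True).start()

# "UI" loop: completely decoupled, just polls the data pool
for _ in range(5):
    print(data_pool["latest"])
    time.sleep(0.5)
control_pool["run"] = False  # the UI controls the acquisition the same way
```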

  13. This is going to be painful. Not so much because you are refactoring code (many of us do that all the time), but because you are switching paradigms, so it's going to be a complete rewrite and you won't be able to keep perfectly good, working and tested code modules (even the worst programs have some).

    But the good news is: there will still only be 1 person that understands the code, only it won't be the other guy.

    I usually find one of the hardest initial steps is deciding where to start. I strongly recommend you don't do it all in one go, but rather use an iterative approach. Identify encapsulated functionality (e.g. a driver) and rewrite that, but maintain the same interface to the rest of the code (to begin with). This way you will be able to leverage existing test harnesses and, once that module is complete, still be able to run the program for systems tests. Then move on to the next.

    At some point you will eventually run out of single nodal points and find that you need to modify the way the modules interact in order to realise your new architecture. But by that point you will have gotten over the initial learning curve and will be confident enough to make much riskier changes whilst still having a functioning application.

    The big bonus of approaching it this way is that you can stop at virtually any point if you run into project constraints (you run out of time/budget, another project gets higher priority, you contract a serious girlfriend, etc.) and still have a functioning piece of software that meets the original requirements. You can put it on the shelf to complete later, but still sell it or move it into production or whatever you do with your software.

  14. I don't have LabVIEW 2010 installed so I can't open your VI, but I've found that you can only edit cells that already have text in them; you can't add text to new cells. You also have to click in exactly the right place at the right time in order to edit cell contents.

    I prefer to use a table control if the user needs to be able to edit values. If you want to be fancy about it, create a table indicator instead and add a few individual controls for different data types. When the user clicks on a cell in the table, get the cell location (the table control has a method for this). If the user is allowed to edit that cell, make the control appropriate to the datatype of that column visible and move it to the cell that was clicked. When the user finishes editing the value, copy the control's value into the appropriate cell in the table and hide the individual control.

    It takes some work, but you can build a very nice-looking interface that lets you mix and match datatypes in the same table, and also have some columns that can be edited and others that cannot. With more work you can have the tab key operate properly, moving from one column to the next.

    Here's an example of this from one of my applications; the first column is values from a Ring control that is populated at run-time, the second column is a floating-point value, and the third column is an enumeration. The remaining columns are calculated and populated as the user fills them.

    [attached screenshot: table with ring, floating-point and enumeration columns]

    That's what the MCL should be as a control (your picture), without us having to jump through hoops and use hacks to emulate proper controls. It's about time NI stopped faffing with blue-sky stuff and put more effort into the core stuff that everybody uses and that has needed development for the last 5 years (controls, events, the installer, more integrated source control support (SVN, Mercurial), et al.).
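
    The overlay-editor pattern described above isn't LabVIEW-specific, either. A minimal tkinter sketch of the same idea - a read-only table plus a single edit control that is moved over the clicked cell and copied back on commit:

```python
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
tree = ttk.Treeview(root, columns=("name", "value"), show="headings")
tree.heading("name", text="Name")
tree.heading("value", text="Value")
for row in (("gain", "1.0"), ("offset", "0.2")):
    tree.insert("", "end", values=row)
tree.pack(fill="both", expand=True)

entry = tk.Entry(root)  # the one overlay editor, hidden until needed

def commit(item):
    tree.set(item, "value", entry.get())  # copy the value back to the cell
    entry.place_forget()                  # then hide the editor again

def on_click(event):
    item = tree.identify_row(event.y)
    column = tree.identify_column(event.x)
    if not item or column != "#2":        # only the Value column is editable
        return
    x, y, w, h = tree.bbox(item, column)  # cell location, as with the MCL
    entry.place(in_=tree, x=x, y=y, width=w, height=h)
    entry.delete(0, "end")
    entry.insert(0, tree.set(item, "value"))
    entry.focus_set()
    entry.bind("<Return>", lambda e: commit(item))

tree.bind("<Button-1>", on_click)
root.mainloop()
```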
