ShaunR

Members • Posts: 4,856 • Joined • Days Won: 293
Everything posted by ShaunR

  1. On the surface, it looks to me like an anti-pattern (maybe in the minority.....again....lol). The beauty of events is that you can fire them and they don't use resources unless there is something listening for them. That is one of their main advantages over queues. This just seems to be trying to circumvent that feature to find a use (a singular use from what I can gather) and it doesn't really add anything to mitigate the drawbacks of events or, indeed, offer anything that cannot be achieved in other ways with more transparent foot-shooting opportunities. Can you give a real-world example where you might use it?
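Since LabVIEW diagrams can't be shown inline here, a minimal Python sketch of the point above (illustrative only, not LabVIEW's user-event implementation - all names are made up):

```python
class Event:
    """Fire-and-forget event: firing with no listeners costs nothing,
    because nothing is stored unless a listener has registered."""

    def __init__(self):
        self._listeners = []

    def register(self, callback):
        self._listeners.append(callback)

    def fire(self, payload):
        # With zero listeners this loop does no work and buffers nothing -
        # unlike a queue, which would store the payload regardless of
        # whether anyone ever dequeues it.
        for cb in self._listeners:
            cb(payload)

evt = Event()
evt.fire("ignored")            # no listeners: payload is simply dropped
received = []
evt.register(received.append)  # now someone is listening
evt.fire("seen")
```

The asymmetry is the whole point: the producer never pays for consumers that don't exist.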
  2. Indeed. In fact, there are very few browsers now that do not support them. I've gone off Chrome at the moment though. Nothing to do with the features or the browser itself (which is arguably the best). More to do with it being so nosy and by default trying to track everything you do and put all your private info on their servers (as I found with my contacts list one day). Still. Not as bad as the iPhone.
  3. Well. Consider you have a public function called "Calculate". This function, amongst others, uses a "Check For Divide By Zero" private function. You can craft a test case that can be applied to the public function that specifically supplies a calculation that will result (at some point) in a divide by zero. You are using your knowledge of the internal workings of the "Calculate" function to indirectly test the "Check For Divide By Zero" private function. This is "Grey-Box" testing. The major bonus to this approach is that your test case code can be generic (it only has to interface to the "Calculate" function) and just supply different calculations but test multiple paths through the private functions without having to create code for all-and-sundry. You can even do things like put thousands of your calculations in a file and just iterate through them, throwing them at the "Calculate" function. The test code is not important; the test data is what requires consideration, and each piece of data can be crafted to test a path or target a specific private function within the public function. As an aside: the examples that ship with the SQLite API are, in fact, the test harnesses and provide 99% coverage of the API (not SQLite itself, by the way; that has its own tests that the authors do). That is why the examples increase when there are new features.
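The data-driven grey-box approach above can be sketched in Python. `calculate` and `check_for_divide_by_zero` are hypothetical stand-ins for the functions named in the post; the harness only ever touches the public function, and each row of test data is crafted to steer execution down one internal path:

```python
def check_for_divide_by_zero(divisor):
    # Hypothetical private helper - never called directly by the tests.
    if divisor == 0:
        raise ZeroDivisionError("divide by zero")

def calculate(a, op, b):
    # Hypothetical public function - the only interface the harness uses.
    if op == "/":
        check_for_divide_by_zero(b)
        return a / b
    if op == "*":
        return a * b
    raise ValueError(f"unknown operator: {op}")

# Generic harness: the code never changes, only the data does.
# Rows like these could just as easily be read from a file.
cases = [
    (6, "/", 3, 2.0),                # exercises the normal division path
    (6, "*", 3, 18),                 # exercises the multiplication path
    (6, "/", 0, ZeroDivisionError),  # targets the private divide-by-zero check
]

def run(cases):
    results = []
    for a, op, b, expected in cases:
        try:
            results.append(calculate(a, op, b) == expected)
        except Exception as e:
            results.append(isinstance(e, expected))
    return results
```

Adding coverage of another internal path means adding a row of data, not writing more test code.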
  4. Sweet! On any Android smartphone/tablet (like your Galaxy Tab) you can use Firefox, Safari, Azura, Opera Mini, Opera Mobile etc. They all work with websockets (as long as it is over a network rather than 3G). There is limited support in the Ice Cream Sandwich 4.0 native browser but before that (Gingerbread etc.) .... nope. iPhone/iPad uses Safari so that's not a problem. I'd be interested to find out more about the Bravia (what OS/browser etc). Sometimes websockets are supported but need enabling as they are off by default (like Opera). I think the issue with TVs will be purely down to being able to install apps if the native browser doesn't support them. So it looks like it's only the Sony Bravia that is the odd-one-out. I wonder also about the LG since most smart TVs have gone for either Linux or Android. Well. A bit more digging and it looks like the Bravia Smart TVs may be using Opera (BRAVIA TV, Opera). So it looks like that may be a go if you can enable it! N.B. I was mistaken earlier: Opera Mini doesn't support them but Opera Mobile does.
  5. Well. There is white-box, grey-box and black-box testing. Testing the public interfaces is generally black-box (test the exposed functions without knowledge of the internal workings against a spec). Testing individual methods is generally white-box (full factorial testing with detailed understanding of the internal workings of the function). Testing public methods with specially considered test cases, crafted to exercise internal paths within the functions, is grey-box (and also requires detailed knowledge of the internal workings). Positive black-box (i.e. test for what it should do) and negative grey-box (i.e. test for what it shouldn't do) together will always give you a better test-cases-vs-coverage ratio than any amount of white-box testing. If you want to write 3 times as much code to test as the original code and have a 95% confidence, then black+white is the way forward. If you want to write a fraction of the original code in test cases and have a 90% confidence, then black+grey is the way (and you won't need to solve the problem ).
  6. Another thing you can do is set the subsystem's execution system to something like "Other 1" then dynamically launch it. This will force it into a different thread pool and the LV scheduler should do the rest. It will also give you better control over how much slice time it can consume by setting the different priorities (this assumes your DLL supports multi-threading and doesn't require the dreaded orange node).
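As a rough analogy in Python (this is not LabVIEW's execution-system mechanism - `subsystem_task` is a made-up stand-in for the dynamically launched subsystem): dedicating a separate worker pool to the subsystem means its work is scheduled on its own thread instead of competing with the caller's.

```python
from concurrent.futures import ThreadPoolExecutor

def subsystem_task(x):
    # Stand-in for the long-running subsystem (e.g. the DLL wrapper).
    return x * x

# A dedicated single-worker executor plays the role of the "Other 1"
# execution system: the subsystem's work runs on its own named thread.
other1 = ThreadPoolExecutor(max_workers=1, thread_name_prefix="Other1")

future = other1.submit(subsystem_task, 7)  # "dynamically launch" it
# ... the caller is free to carry on with its own work here ...
result = future.result()                   # rendezvous when we need the answer
other1.shutdown()
```

The scheduler (here the OS, in LabVIEW the run-time engine) does the rest.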
  7. Thanks Asbo. I just spat my coffee all over my keyboard
  8. Actually. There are more like 5 in total. Saphir's is fully commercial, mine is free for non-commercial use, and the others are free but generally have limited features and support on the various platforms (LabVIEW x64 for example). But all that is really for another thread (even though it is YOUR thread....lol).
  9. Because a fast, self-contained database is a superb and robust solution for many LabVIEW applications.
  10. The commercial licencing is changing (check the site next week ). That won't stop you downloading and playing though.
  11. Yup. The SQLite API For LabVIEW comes with an example of decimating a waveform in real-time with zooming.
  12. Websockets are the technology and yes, both Labsocket and the Websocket API For LabVIEW are 3rd party tools (although not the same - Labsocket requires a STOMP server, I believe, whereas the API is direct TCP/IP so no server required). If you're interested, then I can PM you a link to the Websocket API live demo. You can then see if you can use it from a Samsung Smart TV (because I would like to know too ).
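For anyone wondering what "direct TCP/IP so no server required" involves: the websocket upgrade is plain HTTP over a TCP socket, and the server's half of the handshake is just a fixed hash defined by RFC 6455, so any TCP-capable environment can implement it. A self-contained Python sketch of that key derivation, checked against the worked example in the RFC:

```python
import base64
import hashlib

# Magic GUID fixed by RFC 6455: the server appends it to the client's
# Sec-WebSocket-Key, SHA-1 hashes the result, and base64-encodes the digest
# to produce the Sec-WebSocket-Accept header. No broker or message server
# is involved anywhere in the exchange.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# Worked example from RFC 6455 section 1.3:
accept = websocket_accept("dGhlIHNhbXBsZSBub25jZQ==")
# accept == "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="
```

After this exchange the connection is raw framed TCP in both directions, which is why a LabVIEW TCP implementation needs no extra infrastructure.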
  13. Apparently the Samsung 2012 models use a Maple browser which is Webkit based (Webkit supports websockets). There is a very brief comment of surprise on Google Groups that websockets are working so it could well be an option.
  14. Yeah. I wasn't very clear in pointing out the caveats (and it wasn't worded very well either - I've been up for 36 hrs already; I shouldn't really post). using_events_and_subpanels
  15. Websockets are only an option if the Samsung Browser supports them (or you can put a browser on there that does).
  16. Can't think of any problems with what you have listed, but there are a couple that I can think of in terms of cons vs queues. Cannot use them for subpanels (well, you can, but your events won't work). Cannot guarantee execution order (might not be a consideration for you). Cannot easily encapsulate (for the reason you mentioned about typedefs).
  17. Releasing a queue doesn't destroy a queue unless the reference count falls to zero, you wire true to the "Force Destroy" input, or all VIs that have references to the queue go out of memory (too many bullets in the gun to shoot yourself in the foot with IMHO). The obtain only returns a reference to a queue, not the queue itself (which can cause memory leaks). This means you have to be very aware of how many "obtains" to "releases" there are in your code and, if you pass the queue ref around to other VIs, ensure that you don't release it too many times and make the reference invalid. Since the VI obtains a new reference and releases it on every call if there is already a reference (which makes it atomic and ensures there is always one and only one ref), you only need to get rid of the last reference once you are finished with the queue completely and don't have to worry about matching releases to obtains (i.e. it protects you from the ref count inadvertently falling to zero, VIs going out of memory and invalidating a queue reference, or leakage). The flush is purely so that when you destroy the queue you can see what was left in the queue (if anything). The upside to this is that you can throw the queue VI in any VI without all the wires and only need to destroy it once when you are finished with it (or, as I have used it in the example, to clear the queue).
  18. Why is software so special? It is a tangible deliverable that can be measured and quantified. It's not as if it is like, say, a thought! After consultation to refine the requirements. Yes. I cannot believe you just wrote that. This is a reductio ad absurdum argument. What is worse is that it is a reductio ad absurdum argument based on an analogy Let me ask you this though. How much retail software only runs on Windows, or only on Android, or only on Mac? How much retail software written by anyone actually runs on all available operating systems (environments)? You could probably count those that run on more than one on your fingers (LabVIEW being one of them). Large software companies only do anything for one reason and that is to reduce costs and make profit. I would wager very few (if any) companies hire for special roles of public beta test coordinator; it is usually just an extension of an existing employee's role - managers are easy to come by and large companies are full of them. The same goes for IT. So of course they exploit a free resource. They'd be stupid (in business terms) not to exploit someone's offer to spend time and effort on testing for free when they would otherwise have to spend a considerable amount on employing a department to do it. I don't subscribe to the "software is special and harder than quantum mechanics" school of thinking. I happen to think it is one of the easier disciplines with much less of the "discipline". If you are doing full factorial testing on different PCs then you a) don't have much confidence in your toolchains, b) don't have much confidence in your engineers and c) are expecting to produce crap.
  19. Well. "Agile development" is more of a state of mind than a process. In the same respect as TQM, it encompasses many "methods". However, let's not get bogged down in semantics. I stated I use an iterative method which, simply put, consists of short cycles of requirements, development, test and release-to-quality which runs in parallel with another iterative cycle (verification, test, release-to-customer/production). There's more to it than that, but that's the meat of it. The releases to customer/production are phased releases, so whilst all planned features are fully functional and tested, they are not the entire feature set. I will also add that "release-to-customer/production" doesn't necessarily mean that he gets the code. Only that it is available to inspect/test/review at those milestones if they so choose. With this in mind, when I talk about alpha and beta testing, they are the two "tests" in each of these iterative processes. So the customer (who may or may not be internal, e.g. another department) gets every opportunity to provide feedback throughout the lifecycle. Just at points where we know it works. We don't rely on them to find bugs for us. Feedback from customers is that they are very happy with the process. The release-to-customer/production milestones appear on their waterfall charts (they love M$ Project ) so they can easily track progress and have defined visit dates when either we go to them or they come to us. They also have clearly defined and demonstrable features for those dates. With this analogy, the alpha would be a PSU on a breadboard and the beta the first batch PCB that had been soldered a hundred times with no chassis. That's what prototypes in the real world are and that's exactly the same for software. The first production run would be where a customer might get a play. More semantics. When a customer asks you "what can you make for me" he is asking for product. I think actually we agree. 
I don't propose to just drop it in their lap 6 months after they asked for it. Just that when the job is done, it is done to the extent that he doesn't need to come back to me. Doesn't need to phone, send emails, texts or carrier pigeons except to offer me another project because the last one went so well. Just don't put water in it or you may find it an electrifying experience Not always written. A lot of the time it is just meetings and I have to write the requirements then get them to agree to them. Amazing some of the u-turns people make when they see what they've asked for in black and white. Or, more specifically, what that glib request will cost them
  20. Nope. It's software. It's comparing apples to apples. The expectation of bugs in software is exactly what I was saying about being "trained". There is no excuse for buggy software apart from that not enough time and resources have been spent on eliminating them. That is the reason why there are faster turnarounds in software: because people will accept defective software whereas they will not accept defective hardware. Indeed. However, that can be in parallel with the development cycle where the customer's frame of reference is a solid, stable release (rather than a half-arsed attempt at one). If you operate an iterative life-cycle (or Agile as the youngsters call it ) then that feedback is factored in at the appropriate stages. The "Alpha" and "Beta" are not cycles in this case, but stages within the iteration that are prerequisites for release gates (RFQ, RFP et al.). Well. If it takes three years to fix something like that, then it's time for a career change! If you know there is a problem, then you can fix it. The problem is that it takes time/money to fix things and the culture that has grown up around software is that if they think they can get away with selling it (and it being accepted), then they will. If it wasn't acceptable they'd just fix it. I've never been an iPhone user (always had Androids). However, if the apps are written by any script kiddy with a compiler and given away free, then, basically, you get what you pay for. The phrase "good enough" is exactly what I'm talking about. Good enough to get away with? Skyscrapers aren't perfect, but they are "fit for purpose", "obtain a level of quality", "comply with relevant standards", "fulfill the requirements" and don't fall down when you slam a door . The same cannot be said for a lot of software once released, let alone during alpha or beta testing. What's the difference? A balance between what? Between quality and less quality? Software is software. 
The only difference is how much time a company is willing to spend on it. Please don't get me wrong. I'm not commenting on your products. Just the general software mentality that, at times, makes me despair. I've no idea what that means It's not a "goal". It is a "procedure" and the general plan works in all industries from food production to fighter jets. Why should software be any different? The "goal" is, in fact, that the first time they see it IS the last time they see me (with the exception of the restaurant or bar of course). That means the software works and I can move on to the next project. And I'm saying it doesn't matter what the industry or technology is. It matters very much who the client is and, by extension, what software companies can get away with supplying to the client. Public alpha and beta testing programmes (I'll make that discrimination for clarity in comparison with the aforementioned development cycles) are not a "tool" and are peculiar to software and software alone. They exist only to mitigate cost by exploiting a free resource. God help us the day we see the "Beta" kettle.
  21. Alpha and beta testing (for me) is internal only. Alpha testing will be feature complete and given to other programmers/engineers to break it on an informal basis. For beta testing, it will be given to test engineers and quality engineers for qualification (RFQ). Sometimes it will also be given to field engineers for demos or troubleshooting clients equipment. Clients will never see the product until it is versioned and released.
  22. Well. I deal with automation machines and if it doesn't work there are penalty clauses. The clients are only interested in what day it will go into production and plan infrastructure, production schedules and throughput based on them being 100% operational on day 1. Deficient software is something desktop users have gotten used to and have been trained to expect. Software is the only discipline where it is expected to be crap when new. Imagine if an architect said "here's your new skyscraper. There's bound to be structural defects, but live in it for a year with a hard-hat and we'll fix anything you find".