Posts posted by ShaunR
-
-
1. Sony Bravia SmartTV - Websocket not supported
2. Samsung SmartTV - Websocket supported!!
3. Samsung Galaxy Tab - Websocket not supported
4. iphone - Websocket supported
Sweet!
On any Android smartphone/tablet (like your Galaxy Tab) you can use Firefox, Safari, Azura, Opera Mini, Opera Mobile etc. They all work with websockets (as long as it is over a network rather than 3G). There is limited support in the Ice Cream Sandwich (4.0) native browser, but before that (Gingerbread etc.)... nope. iPhone/iPad uses Safari so that's not a problem. I'd be interested to find out more about the Bravia (what OS/browser etc). Sometimes websockets are supported but need enabling as they are off by default (like Opera). I think the issue with TVs will be purely down to being able to install apps if the native browser doesn't support them.
So it looks like it's only the Sony Bravia that is the odd-one-out. I wonder also about the LG, since most smart TVs have gone for either Linux or Android.
So it looks like it's only the Sony Bravia that is the odd-one-out.
Well. A bit more digging and it looks like the Bravia Smart TVs may be using Opera
So it looks like that may be a go if you can enable it!
N.B. I was mistaken earlier: Opera Mini doesn't support them but Opera Mobile does.
-
Well. There is white-box, grey-box and black-box testing.
Testing the public interfaces is generally black-box (test the exposed functions against a spec, without knowledge of the internal workings). Testing individual methods is generally white-box (full factorial testing with detailed understanding of the internal workings of the function). Testing public methods with specially considered test cases, crafted to exercise internal paths within the functions, is grey-box (and also requires detailed knowledge of the internal workings).
Positive black-box (i.e. test for what it should do) and negative grey-box (i.e. test for what it shouldn't do) together will always give you better test cases vs coverage than any amount of white-box testing.
If you want to write three times as much test code as the original code and have 95% confidence, then black+white is the way forward. If you want to write a fraction of the original code in test cases and have 90% confidence, then black+grey is the way (and you won't need to solve the problem).
-
Another thing you can do is set the subsystem's execution system to something like "Other1" and then dynamically launch it. This will force it into a different thread pool and the LV scheduler should do the rest. It will also give you better control over how much slice time it can consume by setting different priorities (this assumes your DLL supports multi-threading and doesn't require the dreaded orange node).
-
I'm not usually one for personal advice, but you might want to see a doctor about that.
Thanks Asbo.
I just spat my coffee all over my keyboard
-
Actually, there are more like 5 in total. Saphir's is fully commercial, mine is free for non-commercial use and the others are free but generally have limited features and support on the various platforms (LabVIEW x64, for example). But all that is really for another thread (even though it is YOUR thread....lol).
-
Good grief, why are there three?!
Because a fast, self-contained database is a superb and robust solution for many LabVIEW applications.
-
It carries a noncommercial license. How do I get a commercial license? None of the work I do is noncommercial.
The commercial licensing is changing (check the site next week). That won't stop you downloading and playing, though.
-
Yup. The SQLite API For LabVIEW comes with an example of decimating a waveform in real-time with zooming.
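Not the shipped example, but here's a rough sketch in Python (against a made-up "waveform" table) of the min/max-per-bucket decimation idea, so you can see why redrawing on zoom stays cheap:

import sqlite3

# Rough sketch of the decimation idea only - not the shipped example, and the
# "waveform(t, value)" table below is a made-up stand-in.
conn = sqlite3.connect("waveform.db")

def decimate(t_min, t_max, buckets=1000):
    """Return roughly one (min, max) pair per bucket for the visible time range."""
    width = (t_max - t_min) / buckets
    return conn.execute(
        "SELECT CAST((t - ?) / ? AS INTEGER) AS bucket, MIN(value), MAX(value) "
        "FROM waveform WHERE t BETWEEN ? AND ? GROUP BY bucket ORDER BY bucket",
        (t_min, width, t_min, t_max),
    ).fetchall()

# Zooming is just re-running the query with a narrower [t_min, t_max] range.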
-
-
-
Hi Shaun,
I am definitely interested. Actually, I'm trying to arrange to test this with one of my client's distributors for Samsung SmartTV but have to wait until we are ready. If you have something ready, I could request for the test date to be earlier. So, do PM me.
Thanks a lot!
OK. Sent. Have fun and let me know how you get on.
-
By the way, websocket is a third-party toolkit that I need to buy, am I right? Is websocket the same as labsocket?
Websockets are the technology and yes, both Labsocket and the Websocket API For LabVIEW are 3rd-party tools (although not the same - Labsocket requires a STOMP server, I believe, whereas the API is direct TCP/IP so no intermediate server is required).
If you're interested, then I can PM you a link to the Websocket API live demo. You can then see if you can use it from a Samsung Smart TV (because I would like to know too).
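For anyone wondering what "direct TCP/IP, no server required" looks like in practice, here's a rough sketch of a client in Python (the third-party websockets package and the address/port are just stand-ins, not part of the API itself) - the client talks straight to the listening application:

import asyncio
import websockets  # third-party package: pip install websockets

async def main():
    # Hypothetical endpoint - e.g. the LabVIEW application listening on port 8000.
    async with websockets.connect("ws://192.168.1.50:8000") as ws:
        await ws.send("get status")   # send a command straight to the application
        print(await ws.recv())        # read the reply - no broker/server in between

asyncio.run(main())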
-
Apparently the Samsung 2012 models use the Maple browser, which is WebKit-based (WebKit supports websockets). There is a very brief comment of surprise on Google Groups that websockets are working, so it could well be an option.
-
I may not understand the architecture in question. As I understand it, if you have a subVI Front Panel in a subpanel of the main VI, then clicks on that (sub)Front Panel trigger events in the subVI’s event structure. And Alex is going to communicate this to the main VI via a queue, with the main VI talking back via a User Event. Should all work fine.
Yeah. I wasn't very clear in pointing out the caveats (and it wasn't worded very well either - I've been up for 36 hrs already; I shouldn't really post).
-
Which events don’t work in subpanels?
The event structure. Events are handled in the owning VI.
-
Websockets are definitely an option, as is building your VIs into RESTful web services using LV.
Websockets are only an option if the Samsung Browser supports them (or you can put a browser on there that does).
-
Can't think of any problems with what you have listed, but there are a couple that I can think of in terms of cons vs queues:
Cannot use them for subpanels (well, you can, but your events won't work).
Cannot guarantee execution order (might not be a consideration for you).
Cannot easily encapsulate (for the reason you mentioned about typedefs).
-
Hi Guys,
I'm about to start a SCADA-like project where I need to publish some of the results to a few Smart TVs connected to our cRIO via LAN. The current choice is the ones made by Samsung. I did some research on how to show the results on the Smart TV and believe there are only two possible methods of doing this: either using a remote panel, or WebUI. After getting more detailed requirements, I think I cannot use WebUI since the GUI provided for WebUI is a bit primitive - there are certain types of customised graph/chart that cannot be done in WebUI. Now, I am left only with the remote panel. I've heard 'mixed responses' on using remote panels for web-based access via PC and was told to expect a lot more issues and headaches when trying to do this on a Smart TV.
Has anybody worked on or done something similar? Or perhaps, can this be done in the first place?
Please advise. Thanks.
Shazlan
-
ShaunR: In your latest example, you have a case structure named "Destroy" in your "Queue.vi". In there you flush the queue and then destroy it. Is this normal when working with queues? It's just that the majority of the examples I've seen with queues just seem to release the queue at the end.
Kas
Releasing a queue doesn't destroy the queue unless the reference count falls to zero, you wire TRUE to the "Force Destroy" input, or all VIs that have references to the queue go out of memory (too many bullets in the gun to shoot yourself in the foot with, IMHO). The obtain only returns a reference to a queue, not the queue itself (which can cause memory leaks). This means you have to be very aware of how many "obtains" to "releases" there are in your code and, if you pass the queue ref around to other VIs, ensure that you don't release it too many times and make the reference invalid.
Since the VI obtains a new reference and releases it on every call if there is already a reference (which makes it atomic and ensures there is always one and only one ref), you only need to get rid of the last reference once you are finished with the queue completely, and you don't have to worry about matching releases to obtains (i.e. it protects you from the ref count inadvertently falling to zero, VIs going out of memory and invalidating a queue reference, or leakage). The flush is purely so that when you destroy the queue you can see what was left in it (if anything). The upside to this is that you can throw the queue VI into any VI without all the wires and only need to destroy it once when you are finished with it (or, as I have used it in the example, to clear the queue).
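If it helps, here's a rough analogy in plain Python (not LabVIEW, and the names are made up) of the reference counting described above - the queue only dies when the count hits zero or you deliberately force it:

from collections import deque

_registry = {}  # name -> [queue, refcount]; stands in for LabVIEW's named queues

def obtain(name):
    entry = _registry.setdefault(name, [deque(), 0])
    entry[1] += 1          # every obtain bumps the refcount
    return name            # callers get a reference, not the queue itself

def release(name, force_destroy=False):
    entry = _registry.get(name)
    if entry is None:
        raise ValueError("invalid reference")
    entry[1] -= 1
    if force_destroy or entry[1] <= 0:
        del _registry[name]  # queue is only destroyed at zero (or when forced)

ref = obtain("commands")              # creates the queue (refcount = 1)
obtain("commands")                    # another caller bumps it to 2
release(ref)                          # back to 1 - queue still alive
release(ref, force_destroy=True)      # deliberate, final destroy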
-
I've never thought comparing software products with hardware or physical products was particularly relevant.
Why is software so special? It is a tangible deliverable that can be measured and quantified. It's not as if it is like, say, a thought!
Retail software products are nothing like skyscrapers. Imagine if I said to an architect, "I want you to design me a new skyscraper I can build anywhere, but I don't know where I'm going to put it or what environmental conditions it will be subjected to." Would you expect the architect to be successful?
After consultation to refine the requirements. Yes.
If I built one copy of the skyscraper in Antartica and another copy in the Sahara, would they both function equally well, or would there be defects requiring changes to the design? What if I built one in an earthquake zone, or a tsunami zone, or on a raft in the ocean, or on the moon? The desktop environment in which retail software products exist is much less constrained than any architect has to worry about.
I cannot believe you just wrote that.
This is a reductio ad absurdum argument. What is worse is that it is a reductio ad absurdum argument based on an analogy.
Let me ask you this though. How much retail software only runs on Windows, or only on Android, or only on Mac? How much retail software written by anyone actually runs on all available operating systems? (environments). You could probably count those that run on more than one on your fingers (LabVIEW being one of them).
Some companies may do public betas for that reason, but it's not universal in retail software and I really doubt any large software companies do it for that reason. (Except maybe Google with GMail and GoogleDocs.) First of all, public beta testing isn't free. You need to hire beta coordinators to manage interactions with the beta testers, assemble feedback, nail down repro cases, distribute the information to the developers, build infrastructures for getting the software to customers, etc. True, beta testers typically do not get paid (though sometimes they get other forms of compensation), but that's very different from claiming beta testing exists to exploit a free resource.
Large software companies only do anything for one reason, and that is to reduce costs and make a profit. I would wager very few (if any) companies hire for a special role of public beta test coordinator; it is usually just an extension of an existing employee's role - managers are easy to come by and large companies are full of them. The same goes for IT. So of course they exploit a free resource. They'd be stupid (in business terms) not to exploit someone's offer to spend time and effort on testing for free when they would otherwise have to spend a considerable amount employing a department to do it.
Second, in my experience, public beta testing usually results in relatively few new bugs being filed. There were typically at least some new bugs, but not nearly as many as you would expect. Public beta testing of retail boxed software is not a very effective or efficient way to find new bugs. If companies were just interested in saving money they would skip public beta testing altogether. Why do they do it? Depends on what kind of software it is. Sometimes game companies will release betas to test play balance. All the betas I've been part of like to get usability information from customers.
They also specifically use beta testing as a way to check how the software works in a wide range of pc configurations. (Equivalent to putting the skyscraper in the Sahara or on the moon.) There is no way any software company can gather the resources required to test their software on all possible pc configurations. When I was at MS Hardware they had a testing lab that contained (to the best of my recollection) ~50 common computers for software testing. Some were prebuilts from Dell or other vendors, others were home built using popular hardware components. Between all the different hardware combinations and the various driver versions for each piece of hardware we still knew we were only covering a very small slice of the possible configurations.
I don't subscribe to the "software is special and harder than quantum mechanics" school of thinking. I happen to think it is one of the easier disciplines with much less of the "discipline". If you are doing full factorial testing on different PCs then you a) don't have much confidence in your toolchains, b) don't have much confidence in your engineers and c) are expecting to produce crap.
-
I like to think that the Agile process (not that I've ever seen anyone implement a true Agile process FWIW, but there are plenty of people who will tell you that they do) and a vee model with added alpha/beta stages track relatively closely.
Well. "Agile development" is more of a state of mind than a process. In the same respect as TQM, it encompasses many "methods". However. Lets not get bogged down on semantics. I stated I use an iterative method which, simply put, consists of short cycles of requirements, development, test and release-to-quality which runs in parallel with another another iterative cycle (verification, test, release-to-customer/production). There's more to it than that. But that's the meat of it. The release to customer/production are phased releases so whilst all planned features are fully functional and tested. They are not the entire feature set. I will also add that "release-to-customer/production" doesn't necessarily mean that he gets the code. Only that it is available to inspect/test/review at those milestones if they so choose.
With this in mind, when I talk about alpha and beta testing, they are the two "tests" in each of these iterative processes. So the customer (who may or may not be internal, e.g. another department) gets every opportunity to provide feedback throughout the lifecycle, just at points where we know it works. We don't rely on them to find bugs for us.
Feedback from customers is that they are very happy with the process. The release-to-customer/production points appear on their waterfall charts (they love M$ Project) as milestones, so they can easily track progress and have defined visit dates when either we go to them or they come to us. They also have clearly defined and demonstrable features for those dates.
Rather than comparing a software app to a skyscraper, let's try something a little more apt: a software app to, say, a power supply. If I'm designing a power supply, I'd like to think I'd make a prototype before final release that I might get some of my key customers (whether they be internal and/or external) to play with and give me feedback on. I think that probably makes sense. And I think "alpha" and "beta" can be analogous to "prototype" in the product world.
With this analogy, the alpha would be a PSU on a breadboard and the beta the first-batch PCB that had been soldered a hundred times with no chassis.
That's what prototypes in the real world are, and that's exactly the same for software. The first production run would be where a customer might get a play.
Sorry, I should have been more clear: I was referring to services vs products.
More semantics. When a customer asks you "what can you make for me" he is asking for a product.
Then I guess we philosophically disagree. I prefer to work closely with the customer during the project to make sure everything aligns to their expectations (if possible, of course) through all stages of a project's execution. As I alluded to above, I prefer not to drop something in their lap that meets all of their initially defined requirements, but doesn't fit their needs.
I think actually we agree. I don't propose to just drop it in their lap six months after they asked for it, just that when the job is done, it is done to the extent that the customer doesn't need to come back to me. Doesn't need to phone, send emails, texts or carrier pigeons except to offer me another project because the last one went so well.
I see no problem with a "prototype" kettle
Just don't put water in it or you may find it an electrifying experience
You folks with your luxurious "written requirements" and "specifications"!
All my reqs and specs are transmitted verbally and telepathically.
Not a complaint - it's fun when the customer says, "Wow! I didn't think that could be done!"
Not always written. A lot of the time it is just meetings and I have to write the requirements then get them to agree to them. Amazing some of the u-turns people make when they see what they've asked for in black and white. Or, more specifically, what that glib request will cost them
-
Ok, so we were trying to compare apples to oranges. 'nuff said
Nope. It's software. It's comparing apples to apples.
That's not entirely true, but it's not entirely untrue either. I think that some new software is expected to be buggy, possibly because it hasn't gone through alphas or betas, and/or the market has forced an early release of a less-than-mature product. It's also one thing that makes software so awesome - changes (whether they're bug fixes or feature additions) can usually be made with a much faster turnaround than hardware.
The expectation of bugs in software is exactly what I was saying about being "trained". There is no excuse for buggy software other than that not enough time and resources have been spent on eliminating the bugs. That is the reason why there are faster turnarounds in software: because people will accept defective software, whereas they will not accept defective hardware.
That's not always the case, but if we're talking about product platforms here (which we're not), then it's a great way to go - especially if you have more than one customer: you can weight feedback on feature requests and make strategic business decisions based on them, which can be super efficient during alpha and beta cycles, as opposed to waiting until full releases.
Indeed. However, that can be in parallel with the development cycle, where the customer's frame of reference is a solid, stable release (rather than a half-arsed attempt at one). If you operate an iterative life-cycle (or Agile, as the youngsters call it) then that feedback is factored in at the appropriate stages. The "Alpha" and "Beta" are not cycles in this case, but stages within the iteration that are prerequisites for release gates (RFQ, RFP et al.).
Again, apples and oranges. Imagine an iPhone user who wants an app that allows them to read their email: if you said to them "I've got an app that does 98% of what you want, but 1% of the time it'll take an extra 30 seconds to download attachments, so I'm not going to release it for another 3 years until it's perfect", the iPhone user goes and downloads another app.
Well. If it takes three years to fix something like that, then it's time for a career change! If you know there is a problem, then you can fix it. The problem is that it takes time/money to fix things, and the culture that has grown up around software is that if they think they can get away with selling it (and it being accepted), then they will. If it wasn't acceptable they'd just fix it. I've never been an iPhone user (always had Androids). However, if the apps are written by any script kiddy with a compiler and given away free, then, basically, you get what you pay for.
Also, skyscrapers aren't ever perfect - they're good enough. And I'm pretty sure building owners are given tours of their buildings before they're completed
The phrase "Good enough" is exactly what I'm talking about. Good enough to get away with? Sky scrapers aren't perfect, but they are "fit for purpose", "obtain an level of quality", "comply with relevant standards", "fulfill the requirements" and don't fall down when you slam a door
. The same cannot be said for a lot of software once released, let alone during alpha or beta testing.
As for the type of systems you build, I agree with you - and we make similar-sounding systems too. Such things are decided at the requirements gathering and design phases, and are usually locked down early. That said, we also have a suite of software products we use on these programmes, so it's a balance.
What's the difference? A balance between what? Between quality and less quality? Software is software. The only difference is how much time a company is willing to spend on it. Please don't get me wrong.
I'm not commenting on your products. Just the general software mentality that, at times, makes me despair.
In summary, as Jeff Plotzke said during his 2010 NIWeek session: "Models can, and should, fit inside other models."
I've no idea what that means
That's an admirable goal, and works extremely well in some industries. I sometimes find it useful to get feedback from end users during development, because, in the end, we want compliant systems, on time, on budget, that make our clients happy. That last item can be tricky if the first time they see it is the last time they see you.
It's not a "goal".It is a "procedure" and the general plan works in all industries from food production to fighter jets. Why should software be any different? The "goal" is, in fact, that the first time they see it IS the last time they see me (with the exception of the restaurant or bar of course). That means the software works and I can move on to the next project.
Anyway, it depends on the industry, technology, and client. All I'm trying to say is that deriding alpha and beta programmes completely doesn't make sense to me - every tool has a place.
And I'm saying it doesn't matter what the industry or technology is. It matters very much who the client is and, by extension, what software companies can get away with supplying to the client. Public alpha and beta testing programmes (I'll make that discrimination for clarity, in comparison with the aforementioned development cycles) are not a "tool" and are peculiar to software and software alone. They exist only to mitigate cost by exploiting a free resource.
God help us the day we see the "Beta" kettle.
-
LOL.
I'm curious how people decide whether they are doing alpha or beta testing. I've always considered it alpha testing if there are large chunks of functionality that have not been implemented, UI is unfinished, etc. Beta testing is when the software is mostly (> ~80%) feature complete, UI is mostly in place, etc. I've had others tell me they don't consider the software to be in beta testing until it is feature complete and all you're looking for is bugs. Thoughts?
Alpha and beta testing (for me) is internal only. Alpha testing will be feature complete and given to other programmers/engineers to break on an informal basis. For beta testing, it will be given to test engineers and quality engineers for qualification (RFQ). Sometimes it will also be given to field engineers for demos or troubleshooting clients' equipment. Clients will never see the product until it is versioned and released.
-
I agree that you can't dump it in someone's lap totally unfinished, but I couldn't imagine dumping it in my customers' laps without them having an opportunity to tool around with it before it's officially released. What you're suggesting might work for systems that have few and very well defined features, but once you scale up even just a little, the possibilities of what your customers might do with it grow quickly. Look at LabVIEW: I'm glad they have betas, because, without infinite resources or infinite time, they can't possibly hope to test all of the combinations of awesome things we're going to try to make it do.
Maybe we're comparing apples to oranges, or I'm missing your point. It's a thread hijack anyway, so I'm going to split it to a separate thread.
Well. I deal with automation machines, and if it doesn't work there are penalty clauses. The clients are only interested in what day it will go into production, and they plan infrastructure, production schedules and throughput based on it being 100% operational on day 1. Deficient software is something desktop users have gotten used to and have been trained to expect. Software is the only discipline where it is expected to be crap when new. Imagine if an architect said "Here's your new skyscraper. There are bound to be structural defects, but live in it for a year with a hard hat and we'll fix anything you find".
-
...and the opportunity for your potential customers to help shape the direction of your product by pushing it through use cases you hadn't thought of
How so? They can only put it through use cases if it works. Otherwise they are just trying to find workarounds to get it to work.
Alpha = Doesn't work.
Beta = Still doesn't work.
-
....... and here's my offering using queues for command and events for response (not QSE I know, but we have talked about it - or maybe that was another thread).
Does friendship force an item to load with the class library?
in Object-Oriented Programming
Posted · Edited by ShaunR
Well. Consider you have a public function called "Calculate". This function, amongst others, uses a "Check For Divide By Zero" private function. You can craft a test case that can be applied to the public function that specifically supplies a calculation that will result (at some point) in a divide by zero. You are using your knowledge of the internal workings of the "Calculate" function to indirectly test the "Check For Divide By Zero" private function. This is "grey-box" testing.
The major bonus of this approach is that your test-case code can be generic (it only has to interface to the "Calculate" function) and just supply different calculations, yet test multiple paths through the private functions without having to create code for all and sundry. You can even do things like put thousands of your calculations in a file and just iterate through them, throwing them at the "Calculate" function. The test code is not important; the test data is what requires consideration, and each piece of data can be crafted to test a path or target a specific private function within the public function.
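Something like this rough Python sketch (the "calculate" function and the expressions are hypothetical stand-ins): the test loop is generic and only the data is crafted to hit specific internal paths, such as the divide-by-zero check:

# Hypothetical public function under test; internally it would call private
# helpers such as a divide-by-zero check.
def calculate(expression: str) -> float:
    return eval(expression, {"__builtins__": {}})

# Each entry is (expression, expected outcome), crafted to exercise a specific
# internal path - the last one deliberately targets the divide-by-zero check.
test_cases = [
    ("1 + 1",   2.0),
    ("2 * 3.5", 7.0),
    ("1 / 0",   ZeroDivisionError),
]

for expr, expected in test_cases:
    try:
        result = calculate(expr)
        assert result == expected, f"{expr}: got {result}, wanted {expected}"
    except ZeroDivisionError:
        assert expected is ZeroDivisionError, f"{expr}: unexpected divide by zero"
print("all grey-box cases passed")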
As an aside: the examples that ship with the SQLite API are, in fact, the test harnesses and provide 99% coverage of the API (not SQLite itself, by the way; that has its own tests that the authors run). That is why the examples increase when there are new features.