Shazlan

Members
  • Posts: 16

Everything posted by Shazlan

  1. Hi Shaun, superb, and thanks. We've done several tests and the results are:
     1. Sony Bravia Smart TV - WebSocket not supported
     2. Samsung Smart TV - WebSocket supported!!
     3. Samsung Galaxy Tab - WebSocket not supported
     4. iPhone - WebSocket supported
     We've also tested running the LV remote panel on some of the above platforms (for monitoring only, so no LV RTE installation is required):
     1. Samsung Galaxy Tab & SII smartphone (Android) - can load and view the FP
     2. iPhone - fail
     So, I think these two results work for us. Shazlan
  2. Hi Shaun, I am definitely interested. Actually, I'm trying to arrange to test this with one of my client's distributors for Samsung Smart TVs, but I have to wait until we are ready. If you have something ready, I could request an earlier test date. So, do PM me. Thanks a lot!
  3. Hi guys, thank you for the feedback. I was told just earlier today that a remote panel can apparently run on Android OS, provided that no control is needed - only monitoring of data. I was given this link: http://digital.ni.com/public.nsf/allkb/09B82EFACFF958A586256BC800779CB4 and it says that to run a remote panel for monitoring only, I don't need to install the LabVIEW RTE. I haven't had the chance to test this yet but will surely try it next week on one of Samsung's smartphone/tablet products. What do you guys think? I think I heard about WebSockets a while back but haven't had the chance to play around with them at all since then. I'll definitely try to play around with this next week as well, as I need to come up with a mock-up front panel plus some proof of concept - huhuhu, fun days ahead. By the way, WebSocket is a third-party toolkit that I need to buy, am I right? Is WebSocket the same as LabSocket? (A minimal WebSocket server sketch follows after these posts.) Again, thanks for the input!
  4. Hi guys, I'm about to start a SCADA-like project where I need to publish some of the results to a few Smart TVs connected to our cRIO via LAN. The current choice is the ones made by Samsung. I did some research on how to show the results on the Smart TV and believe there are only two possible methods for doing this: either the remote panel or WebUI. After getting more detailed requirements, I think I cannot use WebUI, since the GUI it provides is a bit primitive - certain types of customized graphs/charts cannot be done in WebUI. Now, I am left only with the remote panel. I've heard 'mixed responses' on using remote panels for web-based access via PC and was told to expect a lot more issues and headaches when trying to do this on a Smart TV. Has anybody worked on or done something similar? Or, can this be done in the first place? Please advise. Thanks. Shazlan
  5. Hi guys, I've developed a few remote monitoring systems in the past. One of them used a PXI RT target and the rest used cRIOs. The approach and architecture were based on some of the things I've read on ni.com and this forum. In the process, there were many difficulties and some extensive troubleshooting exercises that I needed to do. While the systems worked and met the users' requirements, they didn't meet my own expectations. For instance, I was hoping that a system could be expanded (adding more cRIOs or PXIs) with ease and little or no re-programming effort. Anyway, 2-3 years have passed and opportunities with similar requirements have emerged, so I would like to start thinking about the architecture at an early stage (i.e. now). In my past systems, I've used NSVs a lot - and they gave me a lot of headaches too. Some of the troubles I had were:
     1. I can't decide whether to lump all shared variables into one library and host them on one system, or to separate them into various libraries and systems... nor do I know what the best approach is, as I've read too many 'suggestions' and too much 'advice'.
     2. Some of the SVs are from custom controls, and the controls are type-defined. When running the VI on the RT target with these SVs in the development environment, everything works smoothly, but when I compiled and deployed, the program didn't run. After extensive troubleshooting, I found out that this had something to do with these SVs - because when I removed the type-defs from the custom controls and recreated my SVs, everything worked fine. I suspect this may have something to do with how I deploy, but after trying several approaches the problem still persisted.
     3. The best and most common of all is unstable connectivity - it works today, but that doesn't guarantee it will work tomorrow. When the host PC changes, the same problems resurface. I read somewhere that I need to read or interface with the .alias file, but this worked sometimes and other times the same problem persisted.
     Attached is the most common architecture that I've used in the past. I would like to move away from NSVs as much as possible. If the application is 1:1, there's no problem, as I can easily use TCP/IP and Network Streams. However, my doubts and headaches come when the RT:host communication is 1:N, N:N or N:1 (a small publish/subscribe sketch of the 1:N case follows after these posts). I've read on ni.com that there are various newer approaches to this, such as STM, AMC (derived from UDP) and Web Services (or was it HTTP). I'd really appreciate it if you guys could share your thoughts and advice here. Remote Mon Arch.pdf
  6. Hi guys, yesterday I was troubleshooting my code for memory leaks. Then a "Memory is full" dialog box popped up. I clicked OK and tried to save the VI. The same dialog box popped up again, but this time LabVIEW crashed after I clicked OK. As a result, I now cannot open my VI - every time I try, I get the LabVIEW Load Error Code 1 message (see attached). I tried to open this VI (also attached) from another computer and the result was still the same. Is there any way this file can be recovered? I really hope there's some light on this issue. Thanks. Shazlan Signal Module.vi
  7. I'm using NI's CVS, so there shouldn't be any driver or IMAQdx compatibility issues. I've posted the same question on NI's forum and here are some of the replies I got: http://forums.ni.com/ni/board/message?board.id=200&message.id=26520&jump=true#M26520 What do you guys think? Unfortunately, I don't have the system with me right now - it's at the site, and I'm planning to make a visit there to correct the issues next Sunday. I hope to have the answers before leaving for the site. Lastly, I'd really appreciate it if somebody here could advise on the concerns that I have. For convenience's sake, the issues are copied and pasted below:
     1. I think I can get away with 3.75 fps, but I need to read and monitor every frame that the camera captures. I personally haven't done this, but it can be done by changing the IMAQdx buffer mode to 'buffer number' (as opposed to 'Last') and wiring in the number of the buffer frame that I want to acquire. This buffer frame number will increase one at a time, and once it has reached the maximum, I will need to reset it back to zero (see the ring-buffer sketch after these posts). Am I right? Also, am I right to say that when the camera has filled up the buffer, it will overwrite the first image in the buffer?
     2. What worries me is that I have hit the memory ceiling even before any processing has taken place. In my application, there will be three to four rejection criteria that I need to detect. The way I am planning to do this is by creating an image (using IMAQ Create) for each process and destroying it once the processing is done and I have the results. To save processing time, I was planning to run all 3 or 4 processes in parallel. Now, with such limited memory, should I change my plan to processing one criterion at a time (IMAQ Create for criterion 1 - process 1 - destroy - IMAQ Create for criterion 2 - process 2 - destroy - ...)? Can someone advise on the best programming structure for this?
     3. Lastly, is there a way for me to estimate the amount of RAM I need to run my program? The CVS-1456 has 128 MB of RAM. How much 'extra' do I need to leave unused for the machine to operate efficiently without errors?
     Thank you.
  8. When I am running only one loop (i.e. one camera), the application runs perfectly at any frame rate (up to 15 fps). If I run the application with both loops enabled at 3.75 fps, there seems to be no problem either. I get the error only when running both loops at 7.5 fps.
  9. Hi Crelf, thanks for your reply. Attached is the basic code that I use and that produces those errors. Ultimately, the code will have APIs for machine vision inspection, which I am developing in a simulated environment - from image files. Thank you in advance! Shazlan Basic Forum.vi
  10. Dear all, my application involves an inspection system using a CVS-1456 and two DFK41BF02 cameras from Imaging Source. The specs of the cameras are 1280 x 960 pixels, 15 fps max, 1/2" CCD. I'm using LabVIEW 8.6.1 for the program. Initially, in MAX, I set the video mode for both cameras to 1280x960 (Mono 8) at 3.75 fps and the transfer speed to 200 Mbps for each camera. The program, which was using high-level IMAQdx functions, worked fine. Later, I found out that I need both cameras to grab simultaneously at 7.5 fps and thus changed the settings in MAX accordingly (increased the video mode to 7.5 fps). When I tried to rerun the program, I could only capture from one of the cameras, while the second camera gave me Error -50150 (see attached). I've checked whether there's enough bandwidth to run both cameras at 7.5 fps simultaneously with the program I found here: http://digital.ni.com/public.nsf/allkb/F7F4DA6482C401278625732D0066EF4E It says that I am only using 52% (26% for each camera) of the bandwidth, so there should be no problem with bandwidth (a back-of-envelope check of that figure follows after these posts). What I did next was to use the low-level IMAQdx functions and set the buffer count to 30. I still get the same error. I tried using a smaller buffer, and when I reduced it to 2 for both cameras, I received a different error, -1074360320. Can anyone explain why this happens and how I can solve this headache?
  11. QUOTE (dblk22vball @ Feb 27 2009, 10:28 PM) Hi, I am working with Hamiza on this. I've written a simple VISA write and read to interface with the device (see attached), but to no avail. The VI either writes character by character to the serial port with a 2 ms delay in between or sends all characters at once - I honestly am not sure whether it makes any difference. Section 3.2.1 of the manual describes the command I send to the device; in hex it is 1c00 0068 0d. I can't seem to figure out what '1c' is as a normal character (the manual refers to it as FS. What is FS? Free space? Full stop?). Apart from '1c', the rest of the command in normal characters is 00h(CR), where (CR) refers to carriage return. What could be wrong? Ideas? (A raw-bytes VISA sketch follows after these posts.)
  12. Thanks, all, for the comments and replies. I've finally replaced the hub and have checked to ensure that it is working. I've tried two connection configurations (please see the attached JPEG). In the first configuration, I connected my FireWire cameras to the hub (which is powered by a DC source) and then connected the hub to NI's Compact Vision System (CVS). All cables used are 6-pin to 6-pin FireWire cables. Under this configuration, I could see the cameras and everything works fine. However, the problem is with the second configuration (which is the one I actually want). Instead of the CVS, I used a laptop with a built-in FireWire port. This port, however, is a 4-pin port. According to local NI support, this should not be a problem, as the other two pins are meant for power and the hub is independently powered. After I made the connection, the camera doesn't appear in MAX (which I assume means that the computer does not see the camera). I've tried several laptops and the problem still persists. Any ideas? This headache has become a great annoyance. All assistance and advice is highly appreciated.
  13. QUOTE (crelf @ Dec 4 2008, 09:57 AM) I went back to Mac (they sold me the hub) and they confirmed that the hub is faulty. The project is now delayed by two weeks! While thinking of ways to get around this, I've had an idea... I have a Magma 1-slot PCI expansion box. This box was to be connected to a laptop via ExpressCard, but I believe the card was damaged a while back (the box is still fine, though). Now, what if I were to insert my PCI FireWire card into the expansion box, power it up and use it as a hub? Does anybody know whether this is possible?
  14. QUOTE (Neville D @ Dec 4 2008, 01:50 AM) Thanks. I've installed the camera driver on the computer. If I connect the camera straight to the PCI card, I'm able to see the camera in MAX and choose the right driver for it there. Driver or not, the main thing is that MAX (or the computer) sees the camera. However, when I use the hub, the computer sees nothing. As for power to the camera when using the hub, I am sure of that. Although the laptop's FireWire port has only 4 pins, the hub itself is powered from the wall socket using an AC/DC adapter. Plus, after I've left the camera connected to the hub for a while, I can feel some heat when I touch it. How do I check whether the camera supports working through a FireWire hub? I thought it was part of the IEEE 1394 (or was it DCAM?) protocol/standard that a computer can be connected to 50+ FireWire devices at one time (at least theoretically). Nonetheless, I've checked the NI forum and found a similar setup (computer-PCI-hub-camera) done by others. The problem that person faced was connecting two hubs to one PCI card (a bandwidth issue).
  15. Hi, I have a desktop with a PCI FireWire card installed and a FireWire camera connected to one of its ports. This setup works fine. However, I recently purchased a new FireWire hub from Belkin (http://catalog.belkin.com/IWCatProductPage.process?Product_Id=193393) to increase the number of FireWire ports. The hub is powered externally from a DC source. The problem appears when I connect the camera to the hub and then the hub to the PCI card. The computer does not seem to be able to 'see' the camera - it does not show up under the IMAQdx subtree in MAX. I've checked the hub and there's power to it. I have no idea how to check whether the hub is working correctly, but since it is brand new, I assume it is okay. Also, I am hoping that I can connect the hub to my laptop. The laptop has a 4-pin FireWire port, but since the hub is powered externally, this shouldn't be a problem (or should it?). I've tried and have the same problem as above. I assume that once the camera is detectable in MAX via the hub, the same setup can also be adopted for laptop/mobile applications. I've browsed through the NI forum and found that other people have done a similar setup before, and it seems to be rather simple. I'm running out of ideas for troubleshooting this bugger. Any help, hints or advice are very much appreciated. Thanks. Shazlan
  16. Dear all, I am using a PXI-1422 image acquisition card to capture images from a FLIR SC3000 camera. I'm currently using LabVIEW 8.2 and FLIR's ThermoVision Digital Toolkit. I'm writing this LabVIEW code for one of my clients. He wishes to use a LabVIEW program to acquire and capture thermal images and then use FLIR's Researcher software to analyze these images later. So far, I am able to capture the images using the LabVIEW program and everything is fine and dandy. The problem is in saving the captured images. FLIR's Researcher can only accept .img, .seq or radiometric JPEG files, and the .img and .seq formats are proprietary.
      Question 1: Does anyone know how to save an image as a radiometric JPEG? I've asked FLIR whether the images I've captured using IMAQ Grab can be saved into .img or .seq files using their Digital Toolkit, and the answer is no. My discussion with FLIR can be followed by going to flir.custhelp.com and searching for message # 070927-000001. There, I was told that Researcher reads 16-bit image files with an embedded header.
      Question 2: Aren't JPEG files saved by LabVIEW 16-bit?
      Question 3: If not, please advise on how to further manipulate the images to meet these requirements.
      Has anyone here done similar things? And with great success, maybe? Help is much appreciated. Shazlan
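
On the WebSocket question in post 3: WebSocket is an open protocol rather than a LabVIEW toolkit, and LabSocket is a separate third-party product built around it. As a rough illustration of the idea only - plain Python using the third-party websockets package, with a made-up port and payload that are not from the posts above - a minimal server that pushes data which any WebSocket-capable browser (such as the one on a Samsung Smart TV) could receive looks roughly like this:

    # Minimal sketch; assumes a recent version of the third-party
    # `websockets` package (pip install websockets), which accepts a
    # single-argument connection handler. Port and payload are invented.
    import asyncio
    import json
    import random

    import websockets

    async def publish(websocket):
        # Push one simulated measurement per second; a browser on the
        # Smart TV would open ws://<cRIO address>:8765/ and render it.
        while True:
            sample = {"tank_level": round(random.uniform(0.0, 100.0), 1)}
            await websocket.send(json.dumps(sample))
            await asyncio.sleep(1)

    async def main():
        async with websockets.serve(publish, "0.0.0.0", 8765):
            await asyncio.Future()  # run until interrupted

    if __name__ == "__main__":
        asyncio.run(main())

On the TV side, a few lines of JavaScript (new WebSocket(...) plus an onmessage handler) would be enough to display the values, which is presumably the part the LabSocket-style toolkits automate.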
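
On the 1:N case raised in post 5, one shared-variable-free pattern is a plain publish/subscribe loop over TCP: the RT target listens for connections and broadcasts every update to whichever hosts happen to be connected. The sketch below only illustrates that pattern in Python; the port number and payload are invented, and a LabVIEW version would use the TCP primitives or STM VIs in the same roles.

    import json
    import socket
    import threading
    import time

    HOST, PORT = "0.0.0.0", 6340       # placeholder port, not from the posts
    subscribers = []                    # sockets of currently connected hosts
    lock = threading.Lock()

    def accept_loop(server: socket.socket) -> None:
        # Every host that connects becomes a subscriber.
        while True:
            conn, _addr = server.accept()
            with lock:
                subscribers.append(conn)

    def main() -> None:
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen()
        threading.Thread(target=accept_loop, args=(server,), daemon=True).start()

        while True:
            # Newline-delimited JSON keeps message framing trivial on the host side.
            update = (json.dumps({"channel": "tank_level", "value": 42.0}) + "\n").encode()
            with lock:
                for conn in list(subscribers):
                    try:
                        conn.sendall(update)
                    except OSError:
                        subscribers.remove(conn)   # drop hosts that went away
            time.sleep(1.0)

    if __name__ == "__main__":
        main()

The same loop inverted (many hosts publishing to one listening RT target) covers the N:1 case; N:N typically ends up with a small broker in the middle.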
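
The buffer-number scheme described in question 1 of post 7 is ordinary ring-buffer indexing. The sketch below shows that indexing in Python; extract_buffer and process are stand-ins for the real acquisition and inspection calls, not a real IMAQdx binding, and whether the driver expects the wrapped index or the ever-increasing count should be confirmed against the IMAQdx documentation.

    NUM_BUFFERS = 30   # size of the acquisition ring, matching the configured buffer count

    def monitor_every_frame(extract_buffer, process):
        """Walk the ring one buffer at a time so no frame is skipped.

        extract_buffer(index) and process(image) are placeholders for the
        real acquisition and inspection calls.
        """
        frame_count = 0
        while True:
            index = frame_count % NUM_BUFFERS   # wraps back to 0 at the maximum
            process(extract_buffer(index))
            frame_count += 1
            # If processing falls more than NUM_BUFFERS frames behind the camera,
            # the oldest buffers have already been overwritten and those frames are lost.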
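
For the bandwidth figure quoted in post 10, a back-of-envelope check (assuming Mono 8 is one byte per pixel and very roughly 35 MB/s of usable FireWire 400 isochronous bandwidth; the NI calculator linked in that post remains the authoritative number) lands close to the reported 26% per camera:

    # Rough payload estimate only; the usable-bandwidth figure is an assumption.
    width, height, bytes_per_pixel, fps = 1280, 960, 1, 7.5
    usable_bus_bytes_per_s = 35e6          # approximate usable FireWire 400 isochronous rate

    per_camera = width * height * bytes_per_pixel * fps   # ~9.2 MB/s
    print(f"per camera : {per_camera / 1e6:.1f} MB/s "
          f"({per_camera / usable_bus_bytes_per_s:.0%} of the bus)")
    print(f"two cameras: {2 * per_camera / 1e6:.1f} MB/s "
          f"({2 * per_camera / usable_bus_bytes_per_s:.0%} of the bus)")

That roughly matches the calculator's 52% total, which supports the conclusion in post 10 that total bus bandwidth alone is unlikely to be the limiting factor.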
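
For the serial command in post 11, 0x1C is the ASCII FS (File Separator) control character, which is why it has no printable form. Below is a minimal sketch of sending the quoted five-byte command as raw bytes using the third-party PyVISA package; the resource name, baud rate and reply length are assumptions, not values from the post.

    import pyvisa   # pip install pyvisa; also needs a VISA runtime installed

    # The command quoted in post 11: FS (0x1C), two 0x00 bytes, 0x68 ('h'), CR (0x0D).
    COMMAND = bytes([0x1C, 0x00, 0x00, 0x68, 0x0D])

    rm = pyvisa.ResourceManager()
    inst = rm.open_resource("ASRL1::INSTR")   # hypothetical COM port
    inst.baud_rate = 9600                     # assumed; match the device manual
    inst.write_raw(COMMAND)                   # send all five bytes in a single write
    print(inst.read_bytes(16))                # assumed reply length; read_bytes avoids
                                              # guessing at a termination character

Sending the bytes one at a time versus all at once usually only matters if the device parses on inter-character timeouts, so a single raw write is the simpler default to try first.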