
ShaunR


Posts posted by ShaunR

  1. Is this a .NET DLL or something? 508 kB for these few functions seems a bit excessive to me. Or is that the price one has to pay nowadays for using VC 2008? :wacko:

    Rolf Kalbermatter

    No, it's a Delphi DLL. It can actually do more than just those two functions, but this thread was asking for serial numbers.

    Saying that, though... getting the info is far from trivial.

  2. We have three machines now with LV 2009, and they all exhibit the same problems. They all have different hardware (and therefore graphics chipsets): one is a laptop, one is an industrial PC, and one is a desktop. Two are running Windows XP Professional x32 and one is running Vista Ultimate x64.

    When using debug execution, if the diagram is in the foreground, it does not switch to the case that is executing. Instead, you have to search through the cases to find the one where half the subVIs and wires are normal and the others are greyed out.

    If the diagram is in the background, or you switch it to the background whilst debugging, subVI icons don't "ungrey" and you lose the arrow that indicates which subVI is currently executing. You can still see the data travelling along the wires, though.

    Has anyone else come across this? Is this a known problem? Is there a solution?

  3. Well, you don't have to be, technically. A while ago (just to see if I could) I made a single EXE that was LabVIEW 7.1 and 8.5: 7.1 because it is my favorite and does just about everything you could want for most applications, and 8.5 because that was the newest version at the time.

    So imagine carrying around a 600 MB EXE (for 8.5, or 300 MB for 7.1) on your drive that you run, and you have a full LabVIEW development environment available on any Windows computer. I was going to post how I made it so others could too, but it's been a while and I'm not sure I remember the steps needed. I do still have the EXEs floating around; maybe if I get time I'll try to make it again.

    The main issue with this is that most people don't have just LabVIEW installed; they have several toolkits for projects they are working on. Adding toolkits to this EXE after it is built isn't impossible, it would just take a lot of work. And installing new packages via VIPM wouldn't be impossible either, it would just be manual. There could be a directory structure next to the EXE, which the EXE then uses as relative files. So let's say LabVIEW85.EXE, and in the same folder is $user.lib$, and what you put in there will be used as extra files in user.lib.

    So basically this EXE would only be useful for making quick VIs, or opening small ones and seeing what's going on, but if the VI needs any toolkits it will be broken.

    Not sure how you can compile the LabVIEW development environment into an EXE (unless you mean you re-install it all on the target machine). The only problem with installing LV on a thumb stick is that when you go to a new machine it has different MAC addresses, so LV licensing crowbars you (I did try, he he). The good thing about booting from a thumb stick is that not only do you have a full development system with all those indispensable tools you've acquired over the years, but it leaves the target machine clean.

  4. Not quite sure what you are doing here. :unsure:

    You are merging the generated signal and the echo response analogue signal and then outputting the result to the AO device. The merged signal cannot be constructed until both signals have been received; only then will the result be output to the analogue output device.

    Wire the output of the signal generator directly to the analogue out device (remove the merge), then merge the analogue in with the response waveform from your echo server. I'm assuming your analogue out is connected to your analogue input, in which case I'm not sure why you would do this, since you don't need the analogue devices at all.

  5. Hi all,

    I'd like to know if anyone has ever used WinXP Embedded on a PXI controller. Apparently NI is recommending this OS (Running LabVIEW on Windows XP Embedded, What Is Windows XP Embedded and Why Should You Care?, NI-DAQmx Support on Windows XP Embedded), but we can't purchase a PXI controller with WinXPe pre-installed... This seems a bit surprising to me: "I recommend this, but it's not in my product list".

    Can anyone share some experience with LabVIEW + DAQmx running on WinXPe?

    One more thing I read about WinXPe SP2 is HORM ("hibernate once, resume many"), which boots up quickly into a specific configuration. That looks interesting for an industrial system; has anyone ever used it?

    Thanks for sharing experience about that :)

    We don't use it on PXI, but we use it on fanless PCs which we run our final LabVIEW apps on. It's been a long time since I had to get the image working, and I remember it took quite a lot of trial and error to find everything I needed. But once you have it, you don't have to revisit the process, and you can install things just like on normal XP.

    Why not try it out? You can download XP Embedded as a trial. Find an old PC no one wants and give it a whirl.

  6. My post just prior to yours was a full success. I'm running LabVIEW 32 to actually do my work. I only installed LabVIEW 64, based on other suggestions in the thread, in order to get all of the hardware drivers loaded. I doubt I will ever use LabVIEW 64 2009; it looks like it is only good for NI-Vision work, as no other modules are supported.

    It looks like the 3.2.1 install/upgrade doesn't play well with a 3.2.0 install.

    Now I'm just beating my head against the wall trying to get the FPGA to compile; it doesn't want to fit in 3 Mgates anymore. :frusty:

    I'm beginning to think that each new version of FPGA compiler becomes LESS efficient at using slices. Are the NI-FPGA and Xilinx boys taking lessons from Microsoft Windows? :cool:

    It's funny that you say the cRIO is far too expensive. Compared to equivalent hardware from non-NI companies, it's the cheapest. I do agree that there are other, cheaper solutions, but we needed the rugged external I/O without requiring a continuous connection to a PC.

    Well, I suppose it's what you are used to, and your requirements (if you really, really need deterministic control and only want to use LV, then there's not much of an alternative). I presume you are in this camp, since you are using an FPGA.

    We have a high IO count (typically >96 DIO lines), so you're talking mega cash with cRIO. We use RS485 digital IO boxes (each has 32 in, 32 out, 24 V @ 0.5 A per channel, and costs us about £80) running at 1 Mbps, which is perfectly adequate for near-real-time system control. But of course they are dumb IO. We usually connect them up to a fanless Intel Atom PC running XP Embedded, which we can also put a PCI card in if we need to. Total cost is about £1200, with more IO than you can swing a cat at (plus you get Gigabit LAN, USB, RS232, 5 V GPIO, etc.).

  7. Get used to constantly needing to clean up the hard drive. My drive is only 64 GB, and my National Instruments folder alone is over 10 GB. Then there's Windows, other Program Files, several SVN repositories, and several gigs of music. I tend to carry around a 2.5" external drive that I keep larger files on (like virtual machines).

    I've got a USB stick that big... lol.

    In fact it (now) has Windows 7, CodeGear and VC++. I pretty much carry my PC in my pocket and just pick a vacant machine if I'm not at my desk, or on-site, or I forget my laptop.

    I'm still chained to a chair with LabVIEW though.

  8. True. If, on the other hand, it is an option it is a good one. For the record, I think for a number of reasons that the functionality in the DSC Module ought to be part of the LabVIEW core. In particular, the functionality in the DSC Module is an extension of existing functionality in such a way that it can take a while to figure out where the boundary lies. Moreover, the publish-subscribe option (Observer pattern) is extremely useful--and pretty much a common programming standard--and NI ought to promote its use in most applications. I think doing so would make LabVIEW development more effective and presumably enhance its marketability in turn.

    I didn't try this, but I'm guessing from your question that it will be big. Then the developer must choose whether the larger memory footprint justifies the ease of development for the particular application. (I also presume that the footprint does not scale linearly with the number of shared variables.) I think for many (most?) applications the larger footprint is not a serious issue.
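    The publish-subscribe (Observer) pattern mentioned above can be sketched in a few lines. This is a minimal, language-neutral illustration in Python, not anything from the DSC Module; the class and method names are made up for the example.

```python
# Minimal publish-subscribe (Observer) sketch. A Publisher keeps a list of
# subscriber callbacks and pushes every value change to all of them.

class Publisher:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        # Register a callable to be notified on every value change.
        self._subscribers.append(callback)

    def publish(self, value):
        # Push the new value to every registered subscriber.
        for callback in self._subscribers:
            callback(value)

# Usage: two independent subscribers receive the same value change.
received = []
pub = Publisher()
pub.subscribe(lambda v: received.append(("logger", v)))
pub.subscribe(lambda v: received.append(("display", v)))
pub.publish(42)
# received == [("logger", 42), ("display", 42)]
```

    The point of the pattern is that the publisher never knows who is listening, which is what makes it easy to add or remove consumers without touching the producer.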

    I don't have a problem with large installations per se, but when you have to deploy an installation of a couple of hundred megs to run a 300 kB executable, it makes Microsoft look positively anorexic. :nono:

    I could understand it in the old days, when LV ran as a virtual machine giving you 32-bit applications in a 16-bit world. But now it's just a huge API with loads of dependent APIs that have nothing to do with your program (like PXI, for instance).

    ---rant over :P

  9. This is correct, but I think that creating this behavior is simple. You could use events to create this situation - for instance, use an event-driven loop to wait upon a variable change event and not allow any other code execution until the event takes place (say, by using a notifier to notify local VIs!).

    Put a single network shared variable in a vi. Save it. Build it into an exe and then build an installer. How big is the installation?

  10. Is LabVIEW getting slower and slower, or is it my quad-core PC?

    I now have spare time for LAVA while waiting for LabVIEW to load, my project to load, and executables to build...

    Some people get coffee while they wait, some get beer. :)

    If I have to do some development on the slow production PC (P4, 1 GB RAM), sometimes I just want to :throwpc: or :frusty:

    I use quad cores, and yes, 8.6 was a dog (nearly 1 minute to load). I commented in another thread that LV 2009 was far quicker (4-5 seconds).

  11. It took me longer than expected to reload Windows 7 64-bit RTM.

    I ended up doing the following:

    1. Installed LabVIEW 64 by itself (since it isn't on the DVDs afaict)
    2. Ran normal DVD installer and picked all the toolkits for which I have licenses. The installer seemed to recognize the existing LabVIEW 64 and automatically activated the appropriate entries for it along with LabVIEW 32.
    3. Reboot
    4. Ran the NI-RIO 3.2.1 installer from NI.com
    5. Reboot
    6. Opened my project and went to the cRIO FPGA node

    I'm further along than before since Realtime nodes are now recognized. But LabVIEW still thinks I don't have all my files. Look at the screenshot:

    post-11254-125129651763_thumb.png

    I tried rerunning NI-RIO 3.2.1 and picking all the options. It was a no-op; the installer didn't see the need to install any files.

    A Google search didn't give me any useful hits. Any suggestions?

    Progress :)

    The CompactRIO (cRIO) driver is a separate installation on the DD DVD. I didn't install it, since I don't use cRIO (far too expensive), so I cannot verify the installation; but looking at my installation, I have NI-RIO 3.2.0 and the NI-RIO 3.2.0 real-time driver installed, since Vision is dependent on them (quite why, I have no idea). I would suggest going back to the DD DVD, installing the Reconfigurable IO section and all its dependents, and seeing if that works.

    One thought: if you are targeting cRIO, it probably isn't 64-bit, so compiling to 64-bit isn't really an option. It might be that the target is only possible in the 32-bit version. Whilst NI do supply FPGA cards for PCs, which probably can be targeted (and hence the need for the NI-RIO driver core), that would explain the "Module Not Supported" lines in your screenshot.

  12. I'm sorry, all.

    I forgot to write the error code that the application returns, so it became more difficult for you to help me.

    This is the error returned from my application:

    Error -1073807253 occurred at VISA Read in COM_Port_Handler.vi->Serial_CORE_Engine.vi->Main.vi

    Possible reason(s):

    VISA: (Hex 0xBFFF006B) A framing error occurred during transfer.

    I opened a ticket with NI, and an engineer answered that this happens because there is noise on the line and I lose some information in the packet. But I don't understand why this happens with a normal RS232 PC port and doesn't happen with an FTDI USB converter. I use only one line and one piece of software.

    For anyone who wants to read the ticket, this is the address: http://forums.ni.com...message.id=2065

    but it is in Italian.

    For the moment I have resolved the problem by cleaning it.

    P.S. Sorry, all, for my bad English.

    This usually occurs with an incorrect baud rate or a baud rate that is not one of the "standard" ones.
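    To make the point concrete, here is a small sketch of a baud-rate sanity check. The list of "standard" rates and the helper function are illustrative, not part of any NI or VISA API; the underlying issue is that many on-board PC UARTs can only derive the classic rates accurately from their clock, whereas FTDI USB converters have flexible clock dividers and tolerate odd rates better.

```python
# Standard UART baud rates that virtually every PC serial chip can
# generate accurately. A mismatch, or a non-standard rate, is a common
# cause of VISA framing errors (0xBFFF006B).
STANDARD_BAUD_RATES = (300, 600, 1200, 2400, 4800, 9600, 19200,
                       38400, 57600, 115200)

def check_baud(rate):
    """Return True if the requested rate is one of the standard ones.

    Hypothetical helper for pre-flight configuration checks: warn the
    user before opening the port rather than debugging framing errors.
    """
    return rate in STANDARD_BAUD_RATES

# check_baud(9600)  -> True
# check_baud(12345) -> False (non-standard; expect trouble on a PC UART)
```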

  13. This is correct, but I think that creating this behavior is simple. You could use events to create this situation - for instance, use an event-driven loop to wait upon a variable change event and not allow any other code execution until the event takes place (say, by using a notifier to notify local VIs!).

    Ooooh. It's all got very complicated very quickly. Now we have variables, events AND notifiers. :P

    Take your proposed topology. Replace the variable with a queue. Don't bother with events, since the queue will wait, and you have what I said a few posts ago: a 1-element lossy queue!

    The only reason you can't use a single notifier is that you have to already be waiting when it fires, or (if you use the history version) you have to clear it before you wait.
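    The 1-element lossy queue idea translates to most languages. Here is a rough Python sketch (the class name is made up); the LabVIEW equivalent uses a size-1 queue with a lossy enqueue, so a reader always sees the latest value and never a stale one.

```python
import queue

# Sketch of a 1-element lossy queue: a new value displaces any unread old
# value, so the consumer always receives the most recent state. Assumes a
# single writer; with multiple writers the drop-then-put below could race.

class LossyNotifier:
    def __init__(self):
        self._q = queue.Queue(maxsize=1)

    def send(self, value):
        # Lossy enqueue: discard the stale value if nobody has read it yet.
        try:
            self._q.get_nowait()
        except queue.Empty:
            pass
        self._q.put_nowait(value)

    def wait(self, timeout=None):
        # Block until a value is available, then consume it.
        return self._q.get(timeout=timeout)

n = LossyNotifier()
n.send(1)
n.send(2)        # overwrites the unread 1
print(n.wait())  # -> 2
```

    Unlike a plain notifier, the value sits in the queue until read, so the consumer does not have to be waiting at the moment it is sent.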

    The DSC Module allows one to create shared variable value change events that one can wire into the dynamic event terminal of an event structure.

    The DSC Module also allows one to create shared variables programmatically at run-time. (I see jgcode just mentioned this.) Currently this feature only supports basic shared variable types, unfortunately.

    In our code we use shared variable events a lot and they work great. In practice we haven't needed to create SVs from scratch at run-time yet. We have done something similar by programmatically copying existing shared variable libraries (with new SV names) and then deploying the copies, which is a useful way to work with multiple instances of a component.

    Shared variables have come a long way from their original instantiation and I think networked shared variables are a pretty reasonable implementation of a publish-subscribe paradigm. They can be pretty easy to implement. (Don't get me wrong, there are some things I still want to change, but we find them quite useful.) I recommend taking a fresh look at them.

    Paul

    Not everyone has (or can afford) the DSC Module. Queues, notifiers and network shared variables all come as standard, and coding around it isn't difficult with the built-in tools. It's just bloody annoying, when a single notifier with a history that gets checked off every time it executes would halve the code complexity. In fact, it shouldn't really be called a "notifier with history"; perhaps a better name would be "notifier that gets round the other notifier bug"... lol.

  14. Hi, I have applied a thresholding method to my original image and I get a black-and-white image. I need to transform it into a binary image. How do I do it? The output is still greyscale. Thanks.

    Instead of replacing values above the threshold with 255 (to get your black and white), replace them with 1. Your picture control will look all black. If you want to see the results in the picture control, right-click on it and select Palette > Binary; you will then have a red and black image. Either your black-and-white or red-and-black (binary) images will work with mask functions, as effectively they operate on zero and non-zero for binary operations.
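    The operation itself is trivial outside of any toolkit. A minimal sketch in plain Python (the function name and the sample pixel values are made up for illustration):

```python
# Threshold a greyscale image (nested lists of pixel values) to a true
# binary image: pixels above the level become 1, the rest 0. Because mask
# operations treat any non-zero pixel as "set", 1 works just as well as
# 255; it simply displays as black unless a binary palette is applied.

def to_binary(img, level):
    return [[1 if px > level else 0 for px in row] for row in img]

gray = [[10, 200],
        [128, 50]]
mask = to_binary(gray, 100)
# mask == [[0, 1], [1, 0]]
```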

  15. Hi!

    I've tried searching Google and the dark side, but I didn't find anything useful. I'm asking here first because there's usually more knowledge around.

    Here's the scenario:

    I'm downloading all files in a directory from an FTP server in LabVIEW 8.6.1, using the NI Internet Toolkit and the FTP VIs.

    If a local file exists, I'd like it to be overwritten. I'm using 'NI_InternetTK_FTP_VIs.lvlib:FTP Retrieve Multiple.vi' to do the job. It does the job just fine if the files do not exist on the local computer. But when files exist, it returns error code 8 with message 'EOF' for the offending files in the file error array. The LabVIEW help doesn't give any clue in the VI documentation, as usual.

    Now I've pinpointed the problem in 'NI_InternetTK_Common_VIs.lvlib:TCP Read Stream.vi' and 'NI_InternetTK_Common_VIs.lvlib:OpnCrtRep File.vi': the input parameters to the latter indicate an operation to create or replace a file in read-only mode. See the problem already? Inside 'OpnCrtRep File.vi' the operation parameter is happily ignored when actually trying to open the file, but read-only mode is set. Next, when creating or replacing a file, the VI attempts to set the file size to 0, which gives the error code 8. Nice.

    :throwpc:

    A question: do I have to roll my own solution? To my eyes either the FTP VI is broken or I'm just being stupid. The documentation says nothing about this, and as I understand from the comments inside the code, it should overwrite the file.

    Anyways, time to go to the big blue room to calm down...

    Cheers,

    Jarimatti

    I would defensively code a check to see if the file exists, force its attribute to be writable, and delete it before trying to save (I actually do this anyway when overwriting files, and have done for years). Then phone NI to see what they say; there may be a reason for it.
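    In pseudo-code terms the defensive sequence is: exists? -> make writable -> delete -> write. A minimal Python sketch of the same idea (the function name is illustrative, and `stat.S_IWRITE` is the read-only attribute toggle that matters on Windows):

```python
import os
import stat
import tempfile

def safe_overwrite(path, data):
    # Defensive overwrite: if the target exists, clear any read-only
    # attribute and delete it before writing the new contents.
    if os.path.exists(path):
        os.chmod(path, stat.S_IWRITE | stat.S_IREAD)
        os.remove(path)
    with open(path, "wb") as f:
        f.write(data)

# Demonstration: overwrite a file that was left read-only.
path = os.path.join(tempfile.mkdtemp(), "report.dat")
with open(path, "wb") as f:
    f.write(b"old")
os.chmod(path, stat.S_IREAD)          # simulate a read-only leftover file
safe_overwrite(path, b"new contents")
```

    On Windows, deleting a read-only file fails outright, which is why the chmod step comes first; on Unix-like systems the delete usually succeeds anyway, so the check is harmless.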

  16. Option 1: Create a queue, a notifier and a rendezvous. Sender Loop enqueues into the queue. All the receiver loops wait at the rendezvous. Receiver Loop Alpha is special. It dequeues from the queue and sends to the notifier. All the rest of the Receiver Loops wait on the notifier. Every receiver loop does its thing and then goes back around to waiting on the rendezvous.

    Option 2: Create N + 1 queues, where N is the number of receivers you want. Sender enqueues into Queue Alpha. Receiver Loop Alpha dequeues from Queue Alpha and then enqueues into ALL of the other queues. The other receiver loops dequeue from their respective queues.

    Option 1 gives you synchronous processing of the messages (all receivers finish with the first message before any receiver starts on the second message). Option 2 gives you asynchronous processing (every loop gets through its messages as fast as it can without regard to how far the other loops have gotten in their list of messages).

    I'd prefer a "Wait on notifier history" that only executed the number of elements in the history. LV 2010?
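    Option 2 above maps cleanly onto ordinary threads and queues. A rough Python sketch (loop and variable names are made up; in LabVIEW these would be parallel loops with queue refnums):

```python
import queue
import threading

# Sketch of "Option 2": N + 1 queues. A relay loop (Receiver Alpha)
# dequeues from the sender's queue and re-enqueues into every worker
# queue, so each worker consumes messages at its own pace.

N = 3
sender_q = queue.Queue()
worker_qs = [queue.Queue() for _ in range(N)]

def relay():
    while True:
        msg = sender_q.get()
        for wq in worker_qs:
            wq.put(msg)          # broadcast to every worker queue
        if msg is None:          # sentinel: stop relaying
            break

t = threading.Thread(target=relay)
t.start()
sender_q.put("hello")
sender_q.put(None)
t.join()
# Each worker queue now holds: "hello", then the None sentinel.
```

    This is the asynchronous variant: nothing synchronises the workers with each other, only the relay's broadcast order. Option 1 would add a rendezvous so all workers finish a message before any starts the next.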

  17. Unfortunately, you can't time an animated gif well enough. The animations have too loose a time slice to be reliable, and anytime the UI thread gets tied up, they can hang. Further you have no ability to reset it in LabVIEW -- whenever you launched the VI, that would be the time displayed.

    A pendulum swinging is just eye candy. You don't need to synchronise it. KISS.
