
wallyabcd

Members
  • Posts: 25
  • Joined
  • Last visited

Posts posted by wallyabcd

  1. While Rolf and the other responders are entirely correct, I have in the past attempted similar exercises with some measure of success by limiting myself to certain types of interfaces at a time, such as RS-232 and RS-422, or Ethernet-like interfaces, and using this pattern: Msg -> Msg + Envelope -> Interface. In fact, I last created such a thing a few months ago for an instrument simulator that supports ASTM LIS2-A2 via RS-232 as well as Ethernet.

    In general, I structure the program as follows: an initial configuration program to open and configure as many interfaces as needed, wait for communications, and close them on command. This program was generic enough, taking all the usual parameters of RS-232 or of Ethernet, depending on the interface. It can be standalone or part of the state machine described below. The ASTM formatting is done in the state machine below.

    The next step is a state machine that detects an incoming connection or outgoing connection request, then opens and initializes the appropriate ports or channels. This state machine should have at least 6 states: incoming connection request, outgoing connection request, subsequent incoming messages from a device, outgoing communications for a device, communication termination request, etc. Messaging then proceeds as follows:

     

    Incoming: Interface -> Source -> Msg Reception -> Envelope Decoding -> Msg Decoding -> Response/Handling

    Outgoing: Destination -> Message -> Envelope Encoding -> Interface -> Msg Transmission -> Status

     

    Envelope decoding strips the communicated message out of its envelope (comm protocol + ASTM format).

    Envelope encoding takes the Msg to be transmitted and formats it for transmission.

    The envelope(s) contain the destination + communications configuration + the ASTM LIS2-A2 overlay formatting, etc.

    The Msg decoding is simply a case structure with all the expected commands or message types, with one case reserved for exceptions, and the logic for decoding and handling each.

    The Msg encoding is a case structure for all the different types of commands or messages that can be transmitted and how to format them.

    From here you can see that the case structure names can be loaded dynamically from a file based on configuration, or replicated for each type of interface; a new case structure corresponding to a new interface or instrument type could even be added when needed.

    In general, this pattern of "Msg -> Msg + Envelope -> Interface" is very flexible and powerful. I originally used it about 12 years ago to enable real-time, priority-based communications on an implantable surgical instrument with distributed monitoring (15 clients with 1 master controlling the instrument).

    It can greatly simplify the creation of a simulator, as you can switch the envelopes, interfaces, or messages, even dynamically.

    Sorry, no LabVIEW example, as I would have to recreate it; maybe in the future if there's enough interest and time…
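
    That said, here is a rough C-flavored sketch of the pattern, just to make it concrete (all names are hypothetical; the real thing would be LabVIEW case structures and clusters):

    /* Hypothetical sketch of the Msg -> Msg + Envelope -> Interface pattern. */
    typedef enum { IF_RS232, IF_ETHERNET } InterfaceType;

    typedef struct {
        InterfaceType iface;        /* which physical interface to use    */
        char dest[32];              /* destination + comm configuration   */
        char payload[256];          /* the bare Msg                       */
    } Envelope;

    /* Envelope encoding: wrap the bare Msg in the comm protocol plus the
       ASTM LIS2-A2 overlay, producing the bytes to put on the wire.      */
    int envelope_encode(const Envelope *env, char *wire, int wirelen);

    /* Envelope decoding: strip the protocol and ASTM framing off received
       bytes, recovering the bare Msg and its source.                     */
    int envelope_decode(const char *wire, int wirelen, Envelope *env);

    /* Msg decoding: the "case structure", dispatching on message type,
       with one case reserved for exceptions.                             */
    void msg_dispatch(const Envelope *env) {
        switch (env->payload[0]) {  /* say the first byte is the msg type */
        case 'Q': /* handle a query  */ break;
        case 'R': /* handle a result */ break;
        default:  /* exception case  */ break;
        }
    }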

    The important element is the pattern; once it's understood, most LabVIEW programmers can create something adequate.

    I hope this gets the juices flowing for more productive discussions.

  2. Specialist in Software Development & Engineering, Electronics Instrumentation, Medical Devices, Dev/Project Management, and V&V; working with start-ups and established companies to rapidly develop innovative products.

    LabVIEW, CVI, C/C++, full dev lifecycle and more; available in Switzerland for employment or contract work.

    Wally

    Geneva / Vaud / Switzerland

  3. Hi gentlemen!

    Who can do me a big favour and convert the VI to LabVIEW 8.5, or give me the parameters to call it from LabVIEW 8.5 without the wrapper?

    This could solve my pesky little problem.

    Thanks

    Wally

    QUOTE (timestamp 1252322442, post 65482)

    Well. Your drive also seems resistant to WMI. The previous DLL used 3 methods, the final one being querying the WMI database.

    Hmmm. Digging deeper...

  4. Hello Bruce;

    Sorry to hear of your problem...

    I suspect it may be similar to mine.

    You didn't mention, though: is your target an RT system?

    If it is, then read on...

    I installed LabVIEW 8.5.1 and opened a project that worked fine under LabVIEW 8.0.1.

    Now, after resolving some issues concerning the FPGA, I was able to compile my project.

    After updating my real-time chassis, I copied all the files to the usual places.

    Immediate crash, with this message on a connected screen:

    MKL FATAL ERROR: Cannot load neither mkl_p3.dll nor mkl_def.dll

    I decided to replace all my dependencies with the older versions, and this worked.

    I was at least able to boot my application.

    Then one by one I started exchanging them for the new ones until the RT system failed.

    At that point I knew it was "lvanlys.dll", yes, the advanced analysis library.

    To make a long story short, the App Builder for some reason includes an upgraded version of this file in 8.5.1 which no longer appears to work under the RT system.

    Upon searching the NI installation directory on my PC, I found an RT version of this file named lvanlys_RT.dll.

    Replacing the old lvanlys.dll with this file also worked, but putting both of them on the RT at the same time fails. It seems NI looks for the old file first and then, if it doesn't find it, for the second one.

    Trying to run my program in the dev environment makes it behave just like yours.

    So you can try replacing the above file like I did.

    QUOTE (bmoyer @ Mar 27 2008, 05:44 PM)

    My main reason for writing this post is not that I expect solutions to my problem, but that I need a way to vent the problems that I've been experiencing with the projects files.

    I recently had to reinstall LabVIEW 8.5 (also 6.1, 7.1.1, 8.0.1) on my PC (basically I had to start from scratch) and now my LabVIEW project files don't work! LabVIEW hangs when opening them until I end-task LabVIEW. So basically I'm left with recreating them from scratch! You would think that if LabVIEW can't find a file in the project, it would tell you instead of hanging. I see project nightmares ahead (and now) if NI doesn't fix this soon. Every file has to be in exactly the right place or else you have to start over (or try to manually parse through the .lvproj xml file with hundreds of files in it to determine which one(s) "might" be the cause).

    I've seen similar problems with LV project files in the past and fortunately I was able to retrieve an archive of the project file, but this time this isn't working!

    Can anyone else relate?

    Bruce

  5. Hi Alessandro;

    True, Lua can be adapted...

    However, I was looking for an off-the-shelf solution that would allow me to run it on LabVIEW RT.

    Testing the DLLs that come with it with the RT DLL compatibility program fails...

    http://digital.ni.com/public.nsf/allkb/0BF...6256EDB00015230

    This means rebuilding Lua as a DLL and removing all the Windows dependencies.

    Just to see if it would work, I attempted to call some of the DLLs included with Lua from the development environment targeted to an RT system, and it didn't work. I then tried to deploy, and it wouldn't...

    My main aim in embedding Lua was to make it easy to sequence complex instrument control, and to allow the control programs to be modified without

    touching the base code. I cannot justify the effort needed to read the relevant documentation in order to modify it, so I have done a very primitive and limited LabVIEW implementation for now. It's not fully satisfactory, but it will have to do; I know that long term we need something more, and I am trying to avoid reinventing the wheel here.

    Lua's website even mentions the need to support RT in a future version...

    My needs are essentially a DLL or two with exported functions that can be called from LabVIEW to interpret either a line of script or a block, and return the results to LabVIEW. UnderC came the closest, but just won't embed because of its dependencies on the Windows memory manager.

    Unfortunately calexpress doesn't quite do it on RT either.
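
    Just to show the shape of what I mean: with the standard Lua C API (assuming Lua 5.1), the desktop version of such a wrapper DLL is only a few lines; it's exactly this kind of DLL that fails the RT checker because of its Windows dependencies.

    #include <string.h>
    #include "lua.h"
    #include "lualib.h"
    #include "lauxlib.h"

    /* Interpret one line or block of script; returns 0 on success and
       copies the Lua error text out otherwise. */
    __declspec(dllexport) int interpret_block(const char *script,
                                              char *err, int errlen)
    {
        int rc;
        lua_State *L = luaL_newstate();   /* fresh interpreter instance  */
        luaL_openlibs(L);                 /* load the standard libraries */
        rc = luaL_dostring(L, script);
        if (rc != 0 && err != NULL) {
            strncpy(err, lua_tostring(L, -1), errlen - 1);
            err[errlen - 1] = '\0';
        }
        lua_close(L);
        return rc;
    }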

    Thanks

    Walters

    QUOTE(Ale914 @ Jul 31 2007, 03:14 PM)

    What kind of tests did you perform with the Lua interpreter?

    Lua is open source, so there is no problem adapting (recompiling) it for our platform; I successfully recompiled it for Linux running on a 32-bit RISC processor.

    The problem may be LuaVIEW, because it is a compiled version of Lua + C code to interface it with LabVIEW functions.

    Right now I haven't any RT system for testing LuaVIEW, so I would like to know your test results.

    I also asked about this on the LuaVIEW mailing list.

    Ciao,

    Alessandro.

  6. Hi;

    Is anyone aware of a script interpreter (C/C++...) or processor that can run under LabVIEW RT?

    The problem with current interpreters like Ch, Lua, Python, and UnderC is their dependencies on other libraries like kernel32.dll and so forth; as such, they fail to work on RT.

    I got UnderC to work under LabVIEW, but it failed under RT because of kernel32.dll...

    Even after adding the file to the project, it still fails; in any case, I wouldn't want to load another memory manager on the RT.

    I am looking for something that comes as a DLL with header files...

    I have a control application that loads and runs very large text scripts for control...

    I also have a very simple language (interpreter) that understands the scripts.

    It was initially kept very simple, but now we would like to extend the language: add loops, local variables in the scripts, and inline evaluation of operations.

    For example:

    m1=move_stage(position=100)
    m2=read_stage_position()
    m3=m1-m2
    if (m3 > 5)
        log_msg(position error)
        blah blah blah...
    else
        log_msg(position correct)
        blah2 blah2 blah2...
    end

    Of course, what is not obvious from the above example is that only the if statement is processed by the external interpreter, while the rest of the lines are processed by our built-in interpreter. The idea is to farm out the loops, branching, and mathematical processing to the interpreter, such that I can remake the application by changing the script.

    My labview functions are already scriptable...

    Needless to say, this is a bit more complicated than it appears, because I have to cache the results from each call in a stack...
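
    To be concrete, the interface I'm after is roughly this (entirely hypothetical names, just to show the shape):

    /* Hypothetical header for the interpreter DLL I'm looking for. */
    typedef void *InterpHandle;

    /* Callback so the interpreter can hand built-in commands such as
       move_stage() back to our own LabVIEW dispatcher. */
    typedef int (*BuiltinFn)(const char *cmd, char *result, int resultlen);

    InterpHandle interp_create(void);        /* no hidden Windows deps */
    void interp_set_builtin(InterpHandle h, BuiltinFn fn);
    int  interp_eval_line (InterpHandle h, const char *line,
                           char *result, int resultlen);
    int  interp_eval_block(InterpHandle h, const char *block,
                           char *result, int resultlen);
    void interp_destroy(InterpHandle h);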

    Walters

  7. Hi;

    Simply setting the FTP transfer to passive mode solves this slowness problem...

    My installation upload time went from 16 minutes to 1 minute 40 seconds!
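
    For anyone replicating this outside LabVIEW: the difference is just which FTP command the client issues for the data connection, PORT (active; the server connects back to you) versus PASV (passive; the client opens the data connection itself). As a sketch, in libcurl terms (hypothetical URL; libcurl defaults to passive):

    #include <curl/curl.h>

    /* Passive-mode FTP download sketch; setting CURLOPT_FTPPORT would
       switch libcurl back to active mode. */
    int fetch(void)
    {
        CURLcode rc;
        CURL *c = curl_easy_init();
        if (!c) return -1;
        curl_easy_setopt(c, CURLOPT_URL, "ftp://rt-target/ni-rt.ini");
        rc = curl_easy_perform(c);   /* data lands on stdout by default */
        curl_easy_cleanup(c);
        return (int)rc;
    }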

    Walters

    QUOTE(wallyabcd @ May 22 2007, 11:59 AM)

    Hi;

    I have a similar problem to yours...

    It's not that slow, but still slower than I want.

    Using the Internet Toolkit, HTTP file transfers are blazingly fast. Transfers of hundreds of kilobytes appear instantaneous.

    Now the bad news: FTP from LabVIEW RT is slow. In this case I have decided to use it anyway, as it's only needed for my application installer. Though I have an HTTP facility built in that I could use, I want the installer to be very generic, such that other FTP tools could even be used to install the application.

    Now the good news.

    One thing that may speed things up a bit is not to use binary mode transfer for everything.

    A second method is to transfer many files simultaneously... many commercial tools do this.

    The third method is to use the DOS transfer tool built into Windows (DOS command line). You can call it from LabVIEW RT. I have had some problems with this approach, though, as sometimes my large transfers (even in binary mode) were truncated.

    Good luck

    Walters

    Spinx

  8. Hi;

    I presume you are using binary mode to log the data; otherwise you would not have this problem!

    Logging the data in ASCII solves this problem, but it has some overhead.

    I have been doing this for the last year, and with proper thread management I get good performance on the RT using a highly distributed program with heavy data logging.

    If you want to go down the binary road, things are more difficult, as in embedded operation your instrument needs to recover automatically in case of power failure or an unprepared power-down.

    If you don't close the file properly by flushing the stream, then you'll get an error opening the file under Windows...

    In this case, as one of the users mentioned, you have to flush after every write to force the system to commit the data to the drive. This also has a small overhead.
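
    In C terms, the discipline is simply flush-after-write (a sketch; on the RT target you'd do the equivalent with the LabVIEW file functions):

    #include <stdio.h>

    /* Append one ASCII log line and push it to the OS immediately, so an
       unprepared power-down doesn't leave the file unreadable. */
    int log_line(FILE *fp, const char *line)
    {
        if (fprintf(fp, "%s\n", line) < 0)
            return -1;
        return fflush(fp);  /* the small per-write overhead mentioned above */
    }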

    In general, I find that unless you are doing data acquisition, using the ASCII format is worth the price in most cases.

    I was very hesitant to do this at the beginning, but I appreciate being able to look at the log files with any simple viewer.

    I regularly produce 30-megabyte text files without problems and without overly slowing down the process.

    Walters

    Spinx

    QUOTE(JustinThomas @ May 11 2007, 06:55 AM)

    I would like to reduce the file opening overhead each time, so the file is kept open for the duration of logging.

    I thought in the RT system there was no shutting down sequence, it was just power off.

    Right now the solution I have come up with is to use a digital input wired to the power off in my main system as an input to my RT device. The RT device is powered off after two seconds of the digital line going high.

    I check for this input in my RT application and perform my clean up operations like file closing.

    I would like to know if there is a way to do this through software. Does the RTOS generate any event before switch off which I can use?

    Regards,

    Justin

  9. Hi Ben;

    The code is configured to use the "other 1" thread in LabVIEW RT, not the UI thread, and the function calls are marked as reentrant; we call them through a sequencer, so there is no risk of multiple simultaneous calls...

    Therefore the icon is colored yellow...

    QUOTE(Ben @ Jun 28 2007, 03:07 PM)

    Is your call configured to run in the UI thread (yellow vs orange node)?

    Ben

  10. Hi;

    I am calling a computation DLL (written in LabWindows/CVI) from LabVIEW RT. The DLL is marked as reentrant and given its own thread.

    When certain fitting functions in the DLL are called, they seem to block the response from the rest of the threads momentarily.

    I was under the impression that this wouldn't happen, as the DLL has its own separate thread and, under all circumstances, LabVIEW RT will allocate some time to all six base threads. This doesn't appear to be so in the case of a DLL call.

    Does anyone know how to force LabVIEW RT (Phar Lap) to do this?

    Thanks.

    Walters

    Spinx

  11. Hi;

    I have a similar problem to yours...

    It's not that slow, but still slower than I want.

    Using the Internet Toolkit, HTTP file transfers are blazingly fast. Transfers of hundreds of kilobytes appear instantaneous.

    Now the bad news: FTP from LabVIEW RT is slow. In this case I have decided to use it anyway, as it's only needed for my application installer. Though I have an HTTP facility built in that I could use, I want the installer to be very generic, such that other FTP tools could even be used to install the application.

    Now the good news.

    One thing that may speed things up a bit is not to use binary mode transfer for everything.

    A second method is to transfer many files simultaneously... many commercial tools do this.

    The third method is to use the DOS transfer tool built into Windows (DOS command line). You can call it from LabVIEW RT. I have had some problems with this approach, though, as sometimes my large transfers (even in binary mode) were truncated.

    Good luck

    Walters

    Spinx

    QUOTE(lraynal @ Mar 6 2007, 02:57 PM)

    Hi,

    I need to transfer files both ways between a computer with LabVIEW RT inside and a computer with Windows and LabVIEW 7.1.

    I tried the Internet Toolkit, but the transfer speed is pretty slow... (4.54 seconds to transfer the NI-RT.INI file!!!)

    I tried a quicker way (0.054 seconds!) using the FTP command (via SystemExec.vi), but I can't copy my RT file directly to my D: drive!

    Does anyone have experience with FTP transfer to an RT target?

    What is the best solution?

    Thanks for your precious help!

    Laurent

  12. Hi;

    Without some sort of architectural diagram, it's a bit hard to follow exactly what you're doing, so I will just give you some advice based on a similar thing that I am doing...

    I am running an RT-based system with FPGA, motion, and vision...

    Essentially, UDP and TCP are used for both communications and logging of status.

    Every command received or processed by the instrument sequencer, and every internal command or piece of feedback, is logged to a file and also transmitted via UDP.

    This works so fast that transferring small files to the instrument from the desktop appears instantaneous.

    The only difference here is that I don't use shared variables. Be very careful with shared variables.

    This system generates quite a bit of data and has no problem with the communications eating lots of memory...

    Make sure you're not reading or writing to synchronous controls or indicators anywhere in your program unintentionally.

    What I would suggest is that you put your communications loop into a separate thread (real easy in LV).

    In your communication thread, put your sender and receiver in separate loops.

    Use a bigger queue.

    Set the loop rate to about 40 ms.

    Give the thread normal priority.

    Use a UDP read with a 1000 ms timeout.

    Make your communications module almost like an independent, self-regulating state machine.

    In essence, try to have your code multitask.

    You can make a quick test to see where the problem may be by lowering the priority of your communications loop to see if anything changes.
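
    To make the receiver-loop shape concrete, here it is as a C sketch (BSD-style sockets assumed; the real thing would be a LabVIEW loop with a UDP Read):

    #include <sys/select.h>
    #include <sys/socket.h>

    /* Self-regulating receiver: waits up to 1000 ms for a datagram, then
       checks the stop flag and loops again, so it never busy-waits. */
    void rx_loop(int sock, volatile int *stop)
    {
        char buf[1500];
        while (!*stop) {
            fd_set rd;
            struct timeval tv = { 1, 0 };        /* 1000 ms timeout */
            FD_ZERO(&rd);
            FD_SET(sock, &rd);
            if (select(sock + 1, &rd, NULL, NULL, &tv) > 0)
                recv(sock, buf, sizeof buf, 0);  /* hand off to a queue */
            /* on timeout: just fall through and re-check the stop flag */
        }
    }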

    Post the code for more...

    Good luck

    Walters

    Spinx

    QUOTE(ruelvt @ Mar 11 2007, 03:32 AM)

    What is the correct method for using UDP communications in RT? I have deployed a simple test program to my CompactRIO: the communications loops for a time-critical application using both the FPGA and the RT processor... The communications loops alone take 45% CPU and 60% memory, leaving little headroom for the actual time-critical decision-making code. Here is what I've tried:

    Try 1: the incoming msg loop contained a UDP Read with infinite timeout, writing shared variables upon receipt of a UDP datagram (this was turned off)... Outgoing msg strings were put in a queue of size 5; an infinite-timeout Dequeue emptied this queue onto the UDP port. I preallocated all arrays and strings, and did any complex Cluster->String bit/byte packing outside the loop, using just Replace String Subset in the loop. Later I figured the Cluster->String stuff was eating processor, so I made a *.conf file to contain the strings and just loaded them in.

    Try 2: incoming msg loop replaced with a timed loop running at 25 ms, UDP read replaced with a 1 ms timeout; outgoing msg loop replaced with a timed loop at 25 ms, Dequeue replaced with a 1 ms timeout.

    I receive messages at about 10 Hz (40 Hz in future), and send at about 20 Hz. All data is replace-previous, but I do examine a byte to determine which data to replace. I cannot use Shared Variables since I am implementing a robotics standard (JAUS, www.mr2fast.net/jaus) that uses UDP and TCP right now... This RIO application must communicate with 5 other executables running on Linux and Windows, developed by several organizations.

    Other RT considerations I've made after reading the ni.com guidelines for RT programming:

    - I've put my loops inside subVIs, and eliminated subVIs where possible.

    - I preallocate strings of max length and then just use "Replace".

    - Most of my VI terminals are "Required" so the VI doesn't have to check whether to use the default or not.

    I've searched this on NI.com and LavaG without much luck. To my knowledge my app is broken into the two suggested loops, "TimeCritical" and "Communications"... hopefully others have had the same problem.

    Thanks!!!

  13. Hello;

    The simple answer to your question is yes, you can return an array from LabVIEW.

    Or you can pass a pointer to an array, and LabVIEW will modify it.

    By default, LabVIEW passes arrays using pointers.

    Indeed, the previous poster is correct in that you are limited to certain function definitions; but if you are the one writing the code, you can usually find a way to recast things so they work.

    The first thing to decide is who you want to allocate the memory: LabVIEW or your C++ program?

    Let's assume you want your C++ code to handle memory. Then you simply configure the Call Library Function node to pass the variables from LabVIEW by value.

    You can also do it the other way around and leave all the memory management to LabVIEW.
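
    For example, with LabVIEW doing the allocating: LabVIEW sizes the array on the diagram and passes it as an array data pointer plus a length, and the DLL just fills the caller's buffer in place (sketch; values are illustrative):

    /* DLL side: LabVIEW owns the memory; we write results into it and
       return only a status code. Matches the prototype quoted below. */
    __declspec(dllexport) int MyFunction(double array[], int length)
    {
        int i;
        for (i = 0; i < length; i++)
            array[i] = i * 0.5;   /* example results, written in place */
        return 0;                 /* status back to the LabVIEW caller */
    }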

    Good luck.

    Walt

    Thanks for your reply.

    You said that it needs a function prototype like the one below:

    int MyFunction(double array[], int length);

    However, the function just returns an int value, not an array! Can LabVIEW return an array to a C program in one call? How is it done?

  14. Hi;

    In a nutshell, the answer is yes!

    However, you cannot enable/disable while loops dynamically...

    You can, however, have them idle until they receive a notification to execute.

    You'll also need a state to stop the loop if you don't want to put a timeout on the Wait on Notification function. Without a timeout, the loop will wait forever and won't iterate until a notification arrives.

    This type of structure is essentially a state machine, so look up the LabVIEW help on state machines...
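
    For what it's worth, the same idle-until-notified-with-timeout pattern looks like this in C (just a POSIX-threads analogy; in LabVIEW you'd use Wait on Notification with a timeout inside the while loop):

    #include <pthread.h>
    #include <time.h>

    pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    int notified = 0, stop = 0;   /* the "notifier" and the global stop var */

    void *worker(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&mu);
        while (!stop) {
            struct timespec t;
            clock_gettime(CLOCK_REALTIME, &t);
            t.tv_sec += 1;                        /* 1 s timeout          */
            pthread_cond_timedwait(&cv, &mu, &t); /* notify or timeout    */
            if (notified) {
                notified = 0;
                /* ...run one pass of the loop's work here... */
            }
            /* on timeout: fall through and re-check the stop flag */
        }
        pthread_mutex_unlock(&mu);
        return NULL;
    }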

    good luck

    Walters

    Hi avid LV users!

    I want to verify if it's possible to run a top-level VI with notifiers and 16 parallel while loops, where the control to stop each while loop is a global bool var... however, the while loop can only be activated by a notifier. After the notifier is fired, the loop runs until the Global Stop Var is clicked.

    Thanks!

  15. Hi Eric;

    DLLs do work under LabVIEW RT 7.0, 7.1, 8.0...

    I have compiled some fitting routines with LabWindows/CVI and have had them running for two years now under the various versions above without a problem.

    However, what I had trouble with was the fact that my C and C++ routines had some very complex function type definitions, so I had to write a wrapper to call them.
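
    As a hypothetical example of what such a wrapper looks like: if the original routine takes an options struct (awkward to build from LabVIEW), the wrapper exposes only flat scalars and arrays:

    /* Original routine with a complex signature (hypothetical). */
    typedef struct { int max_iter; double tol; } FitOpts;
    extern int fit_curve(const double *x, const double *y, int n,
                         const FitOpts *opts, double *coeffs);

    /* Flat wrapper that a Call Library Function node can use directly. */
    __declspec(dllexport) int fit_curve_flat(const double x[],
                                             const double y[], int n,
                                             int max_iter, double tol,
                                             double coeffs[])
    {
        FitOpts opts;
        opts.max_iter = max_iter;   /* rebuild the struct inside the DLL */
        opts.tol      = tol;
        return fit_curve(x, y, n, &opts, coeffs);
    }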

    I built some function panels and exported the functions of interest, and CVI directly generated the DLL. I then used the function panels to automatically generate a LabVIEW import library by treating my DLL as a driver.

    When building the CVI installer, use the embedded run-time library (a very small DLL). This will save space; otherwise the big one will work also.

    With some tweaking you can get an ActiveX component packaged as a DLL to run also...

    For your particular situation, if wsock32.dll is standalone and doesn't make calls to other system files, then it may work (it probably is ActiveX and will be trouble).

    The best option is to compile your code and build a distribution, asking the compiler to include all the dependent libraries; then copy them to LabVIEW for trials.

    Walters

    SpinX Technologies

    Has anyone had any luck calling external code (DLL) in LabVIEW RT?

    I have an application that requires the use of an external library written in C++ which makes use of the STL and some Win32 API calls (wsock32.dll). I am exploring the idea of creating a C wrapper DLL for the library which would enable it to be called by LabVIEW, but I'm not sure that the DLL would run under LabVIEW RT.

    I know NI has a DLL checker on their website that verifies the DLL as capable of running in LabVIEW RT but they still don't guarantee that it will work. I haven't built a DLL that wraps the code yet to test it out. (still not sure if I want to go this route)

    Does anyone have a list of the supported or unsupported libraries in the Ardence Phar Lap ETS (RTOS)? And has anyone been able to call a DLL from LabVIEW RT that makes a call to the WIN32 API? It may also be possible to strip out the Win32 stuff. Has anyone had any experience just calling pure code in a DLL from LabVIEW RT?

    Thanks in advance for your help.

    Eric

  16. Hi;

    Thanks for your reply. In fact, I had perused the article in question, and unfortunately it doesn't clarify that particular question, which is binding a shared variable to a front panel control of a real-time system.

    I had no problem getting it to work on the desktop under windows xp.

    According to NI, you're supposed to be able to do it on the RT, though they don't advise it, as they claim the panel may not always be in memory.

    I have tried it from the dev environment with the panel open (and therefore in memory?), and it doesn't work. No error is given.

    It simply acts as two different variables.

    Programmatically, I can change and read back the value of the shared variable, but these changes don't reflect on the front panel control or vice versa; whereas on Windows this works great...

    I hope these details help...

    Walters

    Howdy,

    Since my name and extension appear in this thread, I felt obliged to reply :)

    The FAQ in the post above was a working document and not meant to be shared externally, meaning there could be some mistakes in it. I would instead point you to an excellent application note on the NI Developer Zone called Using the LabVIEW Shared Variable. It answers most questions about the shared variable feature in LabVIEW.

    As always, the NI support team is ready to help if there are any unanswered questions on the shared variable. Of course, if the shared variable is making your LV development life wonderful, I love hearing success stories, too.

    Best regards,

    Gerardo Garcia

    National Instruments

    LabVIEW Real-Time Product Manager

  17. Hi Fellows;

    This seemed to me like a godsend on a real-time target. I have tried it, and the shared variables themselves do work on the RT, but for now, binding a shared variable to a front panel control seems impossible.

    NI does hint that it can be done, with the caveat and advice that since one cannot be sure that the front panel is loaded in memory, this will not work.

    Now, I tested it under Windows and it works great: I can bind a front panel control to a shared variable; in fact, I can simply drag the shared variable onto the front panel and it works.

    My aim is to create a "double throw" control (like a double switch) where you can turn the lights on or off from two places... or change the numeric value of a control from two places...

    The purpose is to be able to control an instrument from a programmatic

    text interface (script) or directly by clicking buttons on the user interface without

    switching modes in such a way that the interfaces are always synchronized.

    Currently I have this implemented using an RT buffer and the front panel interface: I check on every loop to see if the buffer has something in it, and if yes, update the control; else I use the value in the control. This works great until you have 30 controls on the user interface, at which point it becomes a drag on performance, since I am using the slowest RT...

    Any suggestions or clarifications on how to bind a front panel control on RT?

    Walters

    SpinX Technologies


  18. Thanks Sima;

    They told me the same thing...

    I do understand that for a distributed system, this capability may cause problems,

    but for us, we want our instrument to have no personality until we run the

    software on the desktop...

    This is very convenient during development of an embedded RT system.

    Well, I tried to get VI Server to download and launch a VI on the RT system from the desktop, but it didn't work right away and I haven't yet had time to look at it again... This should be possible, as it's probably what NI does within the development environment right now.

    Walters

    Hi Walters, we've been battling this same issue for a few days now. After some phone calls to NI, one of the application engineers told me that this option may have actually been removed from 8, except we don't know what has replaced it, if anything. There may be a workaround, which would be to make a separate VI that would launch the RT executable on the RT system but I haven't tried it out yet. Although this (Deploying and Launching a Real-Time Application) is for older versions of LabVIEW, the part on "Launching the executable manually from the host computer via VI Server" may work. I'll be trying that next.

    I'll post more as I hear back from NI.

    Sima

  19. Well,

    This problem was solved by breaking the communication routine into two. It became clear that the communication channel, whatever the cause, could not sustain the data transfer rate, due in part to the large number of front panel objects to be read from the FPGA.

    The separation allowed silencing certain parts when needed. This was an interim solution while waiting for DMA transfers, which are here now in LV 8.0.

    Wally

    The whole thing is event driven. For example, if the infrared laser fires up, the 2 photomultipliers fire up and generate 16384 samples, or each segment that a motor goes through generates timing pulses...

    Even slowing these events down to ridiculously low speeds didn't make a difference; nor did putting delays in between transmitting each U32 number via the synchronous variable...

    I am also reading about 2 dozen indicators off the FPGA front panel, which suggests that if the problem is bandwidth, it must happen just momentarily...

    The corruption is consistent and always at the same place...

    My thinking, though, is that for the 7831R, using a synchronous display must put a constraint on the speed and/or bandwidth, effectively reducing one or both...

    Wally

  20. Hi Fellows;

    I just got done playing with LabVIEW 8.0 and converting some current projects to it.

    I was looking forward to using some of the great features, like shared variables and DMA transfer from the FPGA... until I realized that the LabVIEW 8.0 Application Builder doesn't allow you to build a standalone installer for RT targets.

    This in itself is a setback to me.

    I have also not found any documentation on how to build a standalone RT executable that, when launched from the host, downloads itself to the RT as in 7.1.

    This last item is almost a necessity for me during development and testing, and even for application distribution, since then we don't have to go to a customer's site to upgrade their applications. This one at least should be possible...

    Can anyone here please shed more light on this ?

    I can't foresee installing the full LabVIEW development environment for the tester.

    Otherwise, one would have to write a custom FTP utility to do this, which is also not too elegant.

    Walters

  21. The whole thing is event driven. For example, if the infrared laser fires up, the 2 photomultipliers fire up and generate 16384 samples, or each segment that a motor goes through generates timing pulses...

    Even slowing these events down to ridiculously low speeds didn't make a difference; nor did putting delays in between transmitting each U32 number via the synchronous variable...

    I am also reading about 2 dozen indicators off the FPGA front panel, which suggests that if the problem is bandwidth, it must happen just momentarily...

    The corruption is consistent and always at the same place...

    My thinking, though, is that for the 7831R, using a synchronous display must put a constraint on the speed and/or bandwidth, effectively reducing one or both...

    Wally

    The communication rate with interrupts probably does not exceed 1 MB/s; it's very slow... What's your data sampling rate?

  22. Hello Fellows;

    I am having a few problems, and I am wondering if anyone has encountered similar things or knows why...

    I have a setup with a 7831R FPGA board and an 8145 RT board. Basically, one portion of the program gets data from the FPGA and sends it to the real-time board using interrupts and a synchronous variable...

    On the real-time system, the data is received and put into a real-time FIFO for processing elsewhere.

    I have about 3 types of data packages that are sent from the FPGA: one consists of 48 elements, another of 128, and one of 16384. All of them are unsigned 32-bit numbers.

    The first and the last one appear to come through without a problem even at high speed.

    The moment I try sending the package of 128 unsigned 32-bit numbers, a very high noise peak is introduced into all the other data streams. I have even wired constants to all data streams, and the same problem occurs.

    Slowing down doesn't help. I put in code to check the data just before it's written to the synchronous display, and it's error free. I have recreated the FIFOs and memory blocks on the FPGA, to no avail...

    The interesting thing is that if I transmit just one data stream at a time, things work fine. The moment I start

    alternating, the problem returns.

    Sending the data using the interrupt only for notification, with a regular display and binary handshaking, works fine, but it's slow.

    My suspicion is that perhaps I am exceeding the rate at which a synchronous display can be written and read?

    Does anyone know???

    Walters

    SpinX Technologies
