Everything posted by wallyabcd

  1. While Rolf and the other responders are entirely correct, I have in the past attempted similar exercises with some measure of success by limiting myself to certain types of interfaces at a time, such as RS-232 and RS-422 or Ethernet-like interfaces, and by using this pattern: Msg -> Msg + Envelope -> Interface. In fact, I last created such a thing a few months ago for an instrument simulator that supports ASTM LIS2-A2 via RS-232 as well as Ethernet.

     In general, I structure the program as follows: an initial configuration program opens and configures as many interfaces as needed, waits for communications, and closes them on command. This program was generic, taking all the usual parameters of RS-232 or of Ethernet, depending on the interface. It can be standalone or part of the state machine described below; the ASTM formatting is done in that state machine.

     The next step is a state machine that detects an incoming or outgoing connection request, then opens and initializes the appropriate ports or channels. This state machine should have at least six states: incoming connection request, outgoing connection request, subsequent incoming messages from a device, outgoing communications for a device, communication termination request, etc. Messaging then proceeds as follows:

     Incoming: Interface -> Source -> Msg Reception -> Envelope Decoding -> Msg Decoding -> Response/Handling
     Outgoing: Destination -> Message -> Envelope Encoding -> Interface -> Msg Transmission -> Status

     Envelope decoding is stripping the communicated message out of its envelope (comm protocol + ASTM format); envelope encoding means taking the message to be transmitted and formatting it for transmission. The envelope(s) will contain the destination + communications configuration + the ASTM LIS2-A2 overlay formatting, etc.
     The Msg Decoding is simply a case structure with all the expected commands or types of messages, one case reserved for exceptions, and the logic for decoding and handling them. The Msg Encoding is a case structure for all the different types of commands or messages that can be transmitted and how to format them. From here you can see that the case structure names can be loaded dynamically from a file based on configuration, or replicated for each type of interface, or a new case structure corresponding to a new interface or instrument type could even be added when needed.

     In general, this pattern of Msg -> Msg + Envelope -> Interface is very flexible and powerful. I originally used it about 12 years ago to enable real-time, priority-based communications on an implantable surgical instrument with distributed monitoring (15 clients with 1 master controlling the instrument). It can greatly simplify the creation of a simulator, as you can switch the envelopes, interfaces, or messages, even dynamically. Sorry, no example, as I would have to recreate it; maybe in the future if there is enough interest and time… The important element is the pattern and understanding it; most LabVIEW programmers can create something adequate. I hope this gets the juices flowing for more productive discussions.
  2. Specialist in software development & engineering, electronics instrumentation, medical devices, dev/project management, V&V; working with start-ups and established companies to rapidly develop innovative products. LabVIEW, CVI, C/C++, full dev lifecycle and more; available in Switzerland for employment or contract work. Wally, Geneva / Vaud / Switzerland
  3. Thanks Asbo! This is great and may make my day tomorrow... I hope it can work on USB mass storage for my LabVIEW dongle! I will let you guys know later.
  4. Hi gentlemen! Who can do me a big favour and convert the VI to LabVIEW 8.5, or give me the parameters to call it from LabVIEW 8.5 without the wrapper? This could solve my pesky little problem. Thanks, Wally

     Well, your drive seems resistant to WMI as well. The previous DLL used 3 methods, the final one being querying the WMI database. Hmmm. Digging deeper........
  5. Hello Bruce; Sorry to hear of your problem... I suspect it may be similar to mine. You didn't mention, though: is your target an RT system? If it is, read on... I installed LabVIEW 8.5.1 and opened a project that worked fine under LabVIEW 8.0.1. After resolving some issues concerning the FPGA, I was able to compile my project. After updating my real-time chassis, I copied all the files to the usual places. Immediate crash, with this message on a connected screen: MKL FATAL ERROR: Cannot load neither mkl_p3.dll nor mkl_def.dll. I decided to replace all my dependencies with the older versions, and this worked; I was at least able to boot my application. Then, one by one, I started exchanging them for the new ones until the RT system failed. At that point I knew it was "lvanlys.dll", yes, the advanced analysis library. To make a long story short, the app builder for some reason installs an upgraded version of this file in 8.5.1 which no longer appears to work under the RT system. Searching the NI installation directory on my PC, I found an RT version of this file named lvanlys_RT.dll. Replacing the old lvanlys.dll with this file also worked, but putting both of them on the RT at the same time fails. It seems NI looks for the old file first and then, if it doesn't find it, the second one. Trying to run my program in the dev environment makes it behave just like yours. So you can try replacing the above file like I did. QUOTE (bmoyer @ Mar 27 2008, 05:44 PM)
  6. Hi Alexandro; True, Lua can be adapted... However, I was looking for an off-the-shelf solution that would allow me to run it on LabVIEW RT. Testing the DLLs that come with it with the RT DLL compatibility program fails... http://digital.ni.com/public.nsf/allkb/0BF...6256EDB00015230 This means remaking Lua as a DLL and removing all the Windows dependencies. Just to see if it would work, I attempted to call some of the DLLs included in Lua from the development environment targeted to an RT system, and it didn't work. I then tried to deploy, and it wouldn't... My main aim in embedding Lua was to make it easy to sequence complex instrument control, and to allow the control programs to be modified without touching the base code. I cannot justify the effort needed to read the relevant documentation in order to modify it, so I have done a very primitive and limited LabVIEW implementation for now. It's not fully satisfactory, but it will have to do; I know that long term we need something more, and I am trying to avoid reinventing the wheel here. Lua's website even mentions the need to support RT in a future version... My needs are essentially a DLL or two with exported functions that can be called from LabVIEW to either interpret a line of script or a block and return the results to LabVIEW. UnderC came the closest, but just won't embed because of the dependencies on the Windows memory manager. Unfortunately, calexpress doesn't quite do it on RT either. Thanks, Walters QUOTE(Ale914 @ Jul 31 2007, 03:14 PM)
  7. Hi; Is anyone aware of any script interpreter (C/C++...) or processor that can run under LabVIEW RT? The problem with current interpreters like Ch, Lua, Python, and UnderC is their dependencies on other libraries like kernel32.dll and so forth; as such, they fail to work on RT. I got UnderC to work under LabVIEW, but it failed under RT because of kernel32.dll... Even after adding the file to the project, it still fails; in any case, I wouldn't want to load another memory manager on the RT. I am looking for something that comes as a DLL with header files... I have a control application that loads and runs very large text scripts for control... I also have a very simple language (interpreter) that understands the scripts. It was initially kept very simple, but now we would like to extend the language: add loops, local variables in the scripts, and inline evaluation of operations. For example:

     m1=move_stage(position=100)
     m2=read_stage_position()
     m3=m1-m2
     if(m3>5)
         log_msg(position error)
         blah blah blah...
     else
         log_msg(position correct)
         blah2 blah2 blah2...
     end

     Of course, what is not obvious from the above example is that only the if statement is processed by the external interpreter, while the rest of the lines are processed by our built-in interpreter. The idea is to farm out the loops, branching, and mathematical processing to the interpreter such that I can remake the application by changing the script. My LabVIEW functions are already scriptable... Needless to say, this is a bit more complicated than it appears because I have to cache the results from each call on a stack... Walters
  8. Hi; Simply setting the FTP transfer to passive mode solves this slowness problem... My installation upload time went from 16 minutes to 1:40! Walters QUOTE(wallyabcd @ May 22 2007, 11:59 AM)
  9. Hi Jean; You can install both the LabVIEW ETS and RTX modules. The ETS target can be any old desktop, with a few limitations on the choice of LAN card supported. Install it on a spare PC and you're ready to go; it will behave almost the same as a real-time platform. A few functions are not supported, or differ, on each one... Additional hardware is not so obvious and depends on its exact nature. See the NI website for how-to... Walters QUOTE(jlau @ Jul 10 2007, 03:31 PM)
  10. Hi; I presume you are using binary mode to log the data; otherwise you would not have this problem! Logging the data in ASCII solves this problem, but has some overhead. I have been doing this for the last year, and with proper thread management I get good performance on the RT using a highly distributed program with heavy data logging. If you want to go the binary route, things are more difficult, as in embedded operation your instrument needs to recover automatically in case of power failure or an unprepared power-down. If you don't close the file properly by flushing the stream, you'll get an error opening the file under Windows... In that case, as one of the users mentioned, you have to flush after every write to force the system to commit the data to the drive. This also has some small overhead. In general, I find that unless you are doing data acquisition, using the ASCII format is worth the price in most cases. I was very hesitant to do this at the beginning, but I appreciate being able to look at the log files with any simple viewer. I regularly produce text files of 30 megabytes without problems or overly slowing down the process. Walters Spinx QUOTE(JustinThomas @ May 11 2007, 06:55 AM)
  11. Hi Ben; The code is configured to use the "other 1" thread in LabVIEW RT, not the UI thread, and the function calls are marked as reentrant; we call them through a sequencer, so there is no risk of multiple simultaneous calls... That is why the icon is colored yellow... QUOTE(Ben @ Jun 28 2007, 03:07 PM)
  12. Hi; I am calling a computation DLL (written in LabWindows) from LabVIEW RT. The DLL is marked as reentrant and given its own thread. When certain fitting functions are called from the DLL library, they seem to block the response from the rest of the threads momentarily. I was under the impression that this wouldn't happen, as the DLL has its own separate thread and, under all circumstances, LabVIEW RT should allocate some time to all six base threads. This doesn't appear to be so in the case of a DLL call. Does anyone know how to force LabVIEW RT (PharLap) to do this? Thanks. Walters Spinx
  13. Hi; I have a similar problem to yours... It's not that slow, but still slower than I want. Using the Internet Toolkit, HTTP file transfers are blazingly fast; transfers of hundreds of kilobytes appear instantaneous. Now the bad news: FTP from LabVIEW RT is slow. In this case I have decided to use it anyway, as it's used only for my application installer, though it is quite slow. Though I have an HTTP facility built in that I could use, I want the installer to be very generic, so that other FTP tools could even be used to install the application. Now the good news. One thing that may speed things up a bit is not to use binary mode transfer for everything. A second method is to transfer many files simultaneously; many commercial tools do this. The third method is to use the transfer tool built into Windows (DOS command line); you can call it from LabVIEW RT. I have had some problems with this approach, though, as sometimes my large transfers (even in binary mode) were truncated. Good luck, Walters Spinx QUOTE(lraynal @ Mar 6 2007, 02:57 PM)
  14. Hi; Without some sort of architectural diagram it's a bit hard to follow exactly what you're doing, so I will just give you some advice based on a similar thing that I am doing... I am running an RT-based system with FPGA, motion, and vision... Essentially, UDP and TCP are used for both communications and logging of status. Every command received or processed by the instrument sequencer, every internal command, and every piece of feedback is logged to a file and also transmitted via UDP. This works so fast that transferring small files to the instrument from the desktop appears instantaneous. The only difference here is that I don't use shared variables; be very careful with shared variables. This system generates quite a bit of data and has no problems with the communications eating lots of memory... Make sure you're not reading or writing synchronous controls or indicators anywhere in your program unintentionally. What I would suggest:
     - Put your communications loop into a separate thread (real easy in LV).
     - In your communication thread, put your sender and receiver in separate loops.
     - Use a bigger queue.
     - Set the loop rate to about 40 ms.
     - Give the thread normal priority.
     - Replace the UDP read with a 1000 ms timeout.
     - Make your communications module almost like an independent, self-regulating state machine.
     In essence, try to have your code multitask. You can make a quick test to see where the problem may be by lowering the priority of your communications loop and seeing if anything changes. Post the code for more... Good luck, Walters Spinx QUOTE(ruelvt @ Mar 11 2007, 03:32 AM)
  15. Hello; The simple answer to your question is yes, you can return an array from LabVIEW. Or you can pass a pointer to an array and LabVIEW will modify it. By default, LabVIEW passes values using pointers. Indeed, the previous poster is correct in that you are limited to certain function definitions; but if you are the one writing the code, you can usually find a way to recast things so they work. The first thing to decide is who you want to allocate the memory: LabVIEW or your C++ program? Let's assume you want your C++ code to handle memory. Then you simply configure the Call Library Function node to pass the variables from LabVIEW by value. You can also do it the other way around and leave all the memory management to LabVIEW. Good luck. Walt
  16. Hi; In a nutshell, the answer is yes! However, you cannot enable/disable while loops dynamically... You can, however, have them idle until they receive a notifier telling them to execute. You'll also need a state to stop the loop if you don't want to put a timeout on the Wait on Notification function; without a timeout, that wait will block forever and the loop will never iterate. This type of structure is essentially a state machine, so look up state machines in the LabVIEW help... Good luck, Walters
  17. Hi Eric; DLLs do work under LabVIEW RT 7.0, 7.1, 8.0... I have compiled some fitting routines with LabWindows/CVI and have had them running for two years now under the various versions above without a problem. However, what I did have trouble with was that my C and C++ routines had some very complex function type definitions, so I had to write a wrapper to call them. I built some function panels and exported the functions of interest, and CVI directly generated the DLL. I then used the function panels to automatically generate a LabVIEW import library by treating my DLL as a driver. When building the CVI installer, use the embedded LabVIEW runtime library (a very small DLL); this will save space, though the big one will work too. With some tweaking you can get an ActiveX component packaged as a DLL to run as well... For your particular situation: if wsock32.dll is standalone and doesn't make calls to other system files, then it may work (it probably is an ActiveX component and will be trouble). The best option is to compile your code and build a distribution, ask the compiler to include all the dependent libraries, then copy them to LabVIEW for trials. Walters SpinX Technologies
  18. Hi; Thanks for your reply. In fact, I had perused the article in question, and unfortunately it doesn't clarify that particular question, which is binding a shared variable to a front panel control on a real-time system. I had no problem getting it to work on the desktop under Windows XP. According to NI, you're supposed to be able to do it on RT, though they don't advise it, as they claim the panel may not always be in memory. I have tried it from the dev environment with the panel open (and therefore in memory?), and it doesn't work. No error is given; it simply acts as two different variables. Programmatically, I can change and read back the value of the shared variable, but these changes don't reflect on the front panel control, or vice versa; whereas on Windows this works great... I hope these details help... Walters
  19. Hi Fellows; This seemed to me like a godsend on a real-time target. I have tried it, and the shared variables themselves do work on the RT, but for now, binding a shared variable to a front panel control seems impossible. NI does hint that it can be done, with the caveat and advice that, since one cannot be sure the front panel is loaded in memory, it may not work. Now, I tested it under Windows and it works great: I can bind a front panel control to a shared variable; in fact, I can simply drag the shared variable onto the front panel and it works. My aim is to create a "double throw" control (like a two-way switch) where you can turn the lights on or off from two places... or change the numeric value of a control from two places... The purpose is to be able to control an instrument from a programmatic text interface (script) or directly by clicking buttons on the user interface, without switching modes, in such a way that the interfaces are always synchronized. Currently I have this implemented using an RT buffer and the front panel interface: on every loop I check whether the buffer has something in it, and if yes, update the control; otherwise, I use the value in the control. This works great until you have 30 controls on the user interface, at which point it becomes a drag on performance, since I am using the slowest RT... Any suggestions or clarifications on how to bind front panel controls on RT? Walters SpinX Technologies
  20. Thanks Sima; They told me the same thing... I do understand that for a distributed system this capability may cause problems, but for us, we want our instrument to have no personality until we run the software on the desktop... This is very convenient during development of an embedded RT system. Well, I tried to get VI Server to download and launch a VI on the RT system from the desktop, but it didn't work right away, and I haven't yet had time to look at it again... This should be possible, as it's probably what NI does within the development environment right now. We'll see. Walters
  21. Well, this problem was solved by breaking the communication routine into two. It became clear that the communication channel, whatever the cause, could not sustain the data transfer rate, due in part to the large number of front panel objects to be read from the FPGA. The separation allowed the silencing of certain parts when needed. This was an interim solution while waiting for DMA transfers, which are here now in LV 8.0. Wally
  22. Hi Fellows; I just got done playing with LabVIEW 8.0 and converting some current projects to it. I was looking forward to using some of the great features like shared variables and DMA transfer from the FPGA... until I realized that the LabVIEW 8.0 application builder doesn't allow you to build a standalone installer for RT targets. This in itself is a setback to me. I have also not found any documentation on how to build a standalone RT executable that, when launched from the host, downloads itself to the RT, as in 7.1. This last item is almost a necessity for me during development and testing, and even for application distribution, as we then don't have to go to a customer's site to upgrade his applications. This one at least should be possible... Can anyone here please shed more light on this? I can't foresee installing the full dev LabVIEW for the tester. Otherwise, one would have to write a custom FTP utility to do this, which is also not too elegant. Walters
  23. The whole thing is event driven. For example, if the infrared laser fires up, the 2 photomultipliers fire up and generate 16384 samples; or each segment that a motor goes through generates timing pulses... Even slowing these events down to ridiculously slow speeds didn't make a difference, nor did putting delays between transmitting each U32 number via the synchronous variable... I am also reading about 2 dozen indicators off the FPGA front panel, which suggests that if the problem is bandwidth, it must happen only momentarily... The corruption is consistent and always in the same place... My thinking, though, is that for the 7831R, using a synchronous display must definitely put a constraint on the speed and/or bandwidth, effectively reducing one or both... Wally
  24. Hello Fellows; I am having a few problems, and I am wondering if anyone has encountered similar things or knows why... I have a setup with a 7831R FPGA board and an 8145 RT board. Basically, one portion of the program gets data from the FPGA and sends it to the real-time board using interrupts and a synchronous variable... On the real-time system, the data is received and put into a real-time FIFO for processing elsewhere. I have about 3 types of data packages that are sent from the FPGA: one consists of 48 elements, another of 128, and one of 16384; all of them are unsigned 32-bit numbers. The first and the last appear to come through without a problem, even at high speed. The moment I try sending the package of 128 unsigned 32-bit numbers, very high peak noise is introduced into all the other data streams. I have even wired constants to all data streams, and the same problem occurs. Slowing down doesn't help. I put in code to check the data just before it's written to the synchronous display, and it's error free. I have recreated the FIFOs and memory blocks on the FPGA, to no avail... The interesting thing is that if I transmit just one data stream at a time, things work fine; the moment I start alternating, the problem returns. Sending the data using the interrupt only for notification, a regular display, and binary handshaking works fine, but it's slow. My suspicion is that perhaps I am exceeding the rate at which a synchronous display can be written and read? Anyone know??? Walters SpinX Technologies