xavier30
Members · 23 posts

Everything posted by xavier30

  1. Hi Onno, I found myself in the same situation as you a while back (this was before VIPM came out; if it had existed at the time we probably would have used it). We (a team of researchers within a larger organization) were, and still are, developing dedicated, institute-specific modules in LabVIEW for integrating NI equipment into an existing infrastructure. After several projects and years of development this has turned into a framework that we now (try to) maintain and update. Looking for an easy way of distributing this framework to all the "users", we also figured that SVN could get the job done in a simple way, even though it is kind of slow and shows strange behaviour from time to time (I guess the quirks we run into are caused more by the users than by the versioning system itself). What we did was the following:

First, for the framework developers, we agreed on a naming syntax and structure for the repository (using the typical "trunk", "tags" and "branches"), where everything stored in the "tags" folder/repository is considered a release and never modified. If someone finds a bug, the tag is copied into a branch, fixed, and then re-published as a bug/minor fix (plus the fix, if applicable, is applied to the trunk). Sometimes we fix things directly in the trunk, but it might have advanced beyond the released version that needs fixing, so branching is necessary. The released libraries/tools in "tags" carry the folder structure, menu files and resources needed for direct integration into the LabVIEW file/folder structure. - This "solves" the problem of someone committing partially working code.

Second, we created a simple SVN installer (at first by simply using System Exec in LabVIEW to run svn commands in the terminal, but later by making a lightweight SVN client in C/C++ and using the LabVIEW external code node) that does an svn export rather than an svn checkout (a rough sketch of the idea is included below). We did this to avoid users having all the .svn sync stuff in their LabVIEW development environment (remember, these guys are researchers, meaning that if they can, they will fiddle with absolutely everything and of course break it at some point). The installer stores all the needed info in a config file next to the modules in user.lib, or wherever they are installed. In this file we keep track of where/when/what is installed, versions, VIs/files etc., and every time the users launch LabVIEW we run the installer in the background, doing a quick check for new versions of the modules they have installed; if there are any, we prompt them to install the latest release. If the file is deleted, we do a recursive folder listing in the most common install locations, and if we find naming conflicts we try to sort them out (we try to use .lvlib and .llb "containers" to avoid this, but it still doesn't work 100%). The installer checks various things and tries to cope with them (local modifications, custom installation locations, changes in the palette .mnu files etc.), and there are surely many quirks left to sort out, but in general I would say it does what we expect it to do and eases the job of distributing the framework. - I guess what I am trying to say here is that, in essence, a simple installer can easily be realized using SVN, and if the environment you are targeting is well known you might pull it off with just a few hours of work.
On the other hand, if you have several users and have to cope with all the unforeseen/unique things in their environments, you might be better off selecting an off-the-shelf installer (like VIPM) that has already solved many of these problems (though I do find fiddling with these problems fun from time to time). - X
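For illustration only, here is a minimal sketch of the kind of lightweight export call we ended up wrapping in C/C++ (the function name, URL and paths are invented, the real client has far more error handling, and it assumes the svn command-line client is on the PATH):

############# svn_export_sketch.cpp #################
// Sketch only: export (not checkout) a tagged release into the user's
// LabVIEW folder, so no .svn administrative folders end up in the tree.
// Assumes the svn command-line client is installed and on the PATH.
#include <cstdio>
#include <cstdlib>
#include <string>

extern "C" int ExportModule(const char *tagUrl, const char *installDir)
{
    // e.g. tagUrl = "https://server/repo/tags/mymodule-1.2.0" (made-up URL)
    std::string cmd = std::string("svn export --force \"") + tagUrl +
                      "\" \"" + installDir + "\"";
    int rc = std::system(cmd.c_str());   // 0 on success
    std::printf("svn export returned %d\n", rc);
    return rc;
}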
  2. Hi, First off, I don't know if this post is in the correct category, or if the question has been answered before. I have been asking my dear friend Google quite a bit, but this time he/she unfortunately failed me. I was playing around with the LabVIEW provider interface (for lack of a better name) today, trying to create my own custom right-click menus to launch various LabVIEW plugins/scripts. What I did was take an existing provider/plugin such as the SCC tool located in <LabVIEW dir>/resource/framework/provider/SCC, copy it, remove everything I didn't need from the toolkit (leaving only the interface VIs) and point it towards my own custom menu code/VI (using the "provider/API" VIs to manipulate the menus). I kept the copied .ini file intact. This all worked fine and I was ready to take the next step: creating everything from scratch by myself, but here is where I got into trouble. What used to work just fine when stealing the SCC interface VIs now didn't work at all. After some debugging I think I have narrowed it down to the signature used in the .ini file that one has to add in the custom provider folder. This is the ini file I used/stole from the SCC folder:

[Provider]
SupportedInterface=SCC_Interface
ProviderInterfaceVI=SCC_Provider_Interface.vi
ItemInterfaceVI=SCC_Item_Interface.vi
GlobalItemInterfaceVI=SCC_Global_Interface.vi
IsPrimary=0
LicenseName=LabVIEW_Pro
LicenseVersion=10.0.0
LicenseRestrictions=DisableDemoIfActivated
InterfaceVersion=1.0
Signature=5X259R73LC93CCTK3LWBRTLJBS52CR9T

My question to the bright people of LAVA is the following: how the (##insert inappropriate words here##) does one generate the "Signature" key/checksum? I was hoping it would be an MD5 or something similar, but the MD5 generator in LabVIEW (the file-to-MD5 thingy) only generates ASCII hex values, and this string (in the Signature field above) contains characters above "F". Also, I don't know/understand what portion of the file the signature is generated for. Is the "Signature" field/line excluded when the calculation is done and then appended to the file? If anyone could shed some light on this it would be highly appreciated! (I'll bring kudos to the next NI Week!!)

(P.S. I have had a look at Jim's/JKI's "Right-Click Framework" and considered it, but it didn't work quite as well under OS X and Linux as I had hoped, plus I was really hoping to get some answers for my own understanding of how LabVIEW calls the "providers", to see if my interpretation so far is correct.)

Regards, X
  3. Dear LAVAers, I guess this question might be a recurring issue when porting LabVIEW applications between platforms, but I was hoping some of the bright minds here had a solution for this problem. I was trying to set up a system where users can run a Windows (XP) built LabVIEW application on a PXI (the PXI currently runs Windows, but will probably be running Phar Lap when everything is finished) and interact with the applications on the PXI through remote panels from several Linux clients (running Red Hat Scientific Linux 5). The LabVIEW web server on the PXI hosting the VIs works perfectly, but I run into some font issues on the Linux client side, causing unwanted cosmetic changes when loading/running the user interface. In the light of this, I have a few questions:

- Has anyone tried porting Windows fonts to a Red Hat based Linux system? If so, is it possible to do this without being root, but enforce the necessary setup through environment variables when calling LabVIEW? (The reason I ask the latter is that I don't have root access to all the client machines.)
- Are there any "good" fonts I could use that would at least interpolate reasonably well on both platforms? Unfortunately I am quite inexperienced when it comes to fonts on different platforms, so I don't even know what the system defaults are.

I got a few tips from the old LAVA forum (off the Google cached pages), forcing the following in the LabVIEW ini file:

appFont=""Tahoma" 13"
dialogFont=""Tahoma" 13"
systemFont=""Tahoma" 13"

and also from NI: "Use pixel-based font sizes—Linux Causes LabVIEW to use pixel size instead of point size to select which fonts to load. This checkbox is unchecked by default. Placing a checkmark in this checkbox causes text to be smaller on large (100 dpi) displays but results in higher-quality cross-platform VIs." ( http://zone.ni.com/reference/en-XX/help/371361B-01/lvdialog/miscellaneous_options/ )

Am I looking at this the wrong way, or are there other things I could do to ensure proper rendering on all platforms? Any feedback would be appreciated. X
  4. Yay! Finally LAVA is back!! I dreaded that my No. 1 LabVIEW resource was gone; I have been using Google's cached copy of the LAVA pages for the last month XD. (By the way, could you perhaps recover some of the missing pieces from them, if there are any?) I'm gonna have a few cold brown ones tonight, just in the name of LAVA 2.0 and all you guys who got this great site back up!
  5. I have been doing some debugging for a while now, and everything so far indicates that the problems we are getting in this case come from using the g++ 3.4.x compiler, which links against libstdc++.so.6, when using things like std::list and std::vector inside the library (we are not re-shaping memory allocated by LabVIEW, but we are using C++ functions in other libraries, called by mine, to do some of the work). I think I managed to reproduce the fault by making a small test library where I create some C++ functions that don't do anything except initialise things like vectors and lists, compiling it as a shared library with g++ 3.4.x and calling it from LabVIEW. When I build this VI into an executable and run it, it crashes, but when I compile everything with g++ 3.2.x it seemingly works fine. So to me it seems that the LabVIEW 8.6 run-time library (liblvrt.so.8.6.0) is not compatible with g++ 3.4.x compiled libraries? If anyone could verify or deepen this a bit, it would be highly appreciated. Cheers X P.S. I have included the test code and makefile.
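The original attachment is not available here, but a minimal sketch of the kind of test library described above (the file, function name and build line are my own assumptions) could look like the following, built with something like "g++ -shared -fPIC -o liblvcrash_test.so lvcrash_test.cpp" using g++ 3.4.x and called from LabVIEW through the Call Library Function Node:

############# lvcrash_test.cpp #################
// Sketch only: does nothing except construct and destroy std::vector and
// std::list objects, so every allocation goes through libstdc++.so.6.
#include <list>
#include <string>
#include <vector>

extern "C" int TestStlAllocations(int count)
{
    std::vector<double> v(count, 0.0);
    std::list<std::string> l;
    for (int i = 0; i < count; ++i)
        l.push_back("element");
    return static_cast<int>(v.size() + l.size());
}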
  6. QUOTE (Adam Kemp @ Mar 30 2009, 07:11 PM) Thanks Adam. I unfortunately haven't gotten hold of the sources of the code causing the double-free problem yet (it's not my part of the code, and it might take some time), but I found a nifty little tool called "DieHard" that I'll check out in the meantime: http://www.diehard-software.org/ , to see if I can suppress the error until I get the sources and hopefully manage to fix the problem. Not really the solution I wanted, but if it works I'll use it in the meantime. Thanks for all the input though. X
  7. Thanks Rolfk and Adam.

To Rolfk: I tried removing all inputs and outputs between LabVIEW and the library, and instead piping all error messages and events in the library to stdio. I also changed all inputs to constants in the C wrapper itself, re-compiled and checked whether the LabVIEW executable would crash just by calling the library, which it did.

To Adam: I will see if I can get some more data by enabling full error checking. I did a small debugging session with gdb and managed to get a backtrace from the shared library and its callers. It seems that the problem could come from an exception created by what you see in frame #12 from the "curl wrapper", causing the "rbac" object to be deleted. What I also now realise is that my shared library is compiled against the standard libstdc++.so.6 while LabVIEW normally includes libstdc++.so.5? I will try setting up an environment linking all the sources against the standard libraries included by NI.

#0 0x00b2f7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
#1 0x00b70825 in raise () from /lib/tls/libc.so.6
#2 0x00b72289 in abort () from /lib/tls/libc.so.6
#3 0x00ba4cda in __libc_message () from /lib/tls/libc.so.6
#4 0x00bab56f in _int_free () from /lib/tls/libc.so.6
#5 0x00bab94a in free () from /lib/tls/libc.so.6
#6 0x00933b31 in operator delete () from /usr/lib/libstdc++.so.6
#7 0x0082ca0b in __gnu_cxx::new_allocator<std::_List_node<std::string> >::deallocate (this=0xbfffca0c, __p=0x8331888) at /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/ext/new_allocator.h:86
#8 0x0082c9d4 in std::_List_base<std::string, std::allocator<std::string> >::_M_put_node (this=0xbfffca0c, __p=0x8331888) at /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/bits/stl_list.h:315
#9 0x008300bc in std::_List_base<std::string, std::allocator<std::string> >::_M_clear (this=0xbfffca0c) at /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/bits/list.tcc:78
#10 0x0082fff8 in ~_List_base (this=0xbfffca0c) at /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/bits/stl_list.h:330
#11 0x0082ffbd in ~list (this=0xbfffca0c) at ../rbac/LoginModule.h:40
#12 0x0083852f in RBAC::CurlWrapper::curlRequest (servers=@0x83187d0, request=@0xbfffcadc, replyHeader=@0xbfffca8c, replyData=@0xbfffca7c) at CurlWrapper.cpp:202
#13 0x0082fc3e in RBAC::LoginModule::getAndSaveToken (this=0x83187c8, query=@0xbfffcadc) at LoginModule.cpp:51
#14 0x008308c6 in RBAC::ExplicitLoginModule::login (this=0x83187c8) at ExplicitLoginModule.cpp:64
#15 0x0082d6f3 in RBAC::LoginContext::login (this=0xbfffcc9c) at LoginContext.cpp:166
#16 0x0082dcad in RBAC::LoginContext::login (application=@0xbfffccfc, userName=@0xbfffcd1c, password=@0xbfffcd3c) at LoginContext.cpp:254
#17 0x00826bd0 in Token (pwd=0x879dcbc "MYFakePassWord", back=0x87b4464 "") at Test.cpp:49
#18 0x0832c4a4 in ?? ()
#19 0x0879dcbc in ?? ()
#20 0x087b4464 in ?? ()
#21 0x00000000 in ?? ()
  8. QUOTE (Adam Kemp @ Mar 27 2009, 04:36 PM) Hi Adam, and thanks for your reply! I have started the process of narrowing down the problem, and it now seems to be coming from the shared library. My problem is that this implementation is huge and involves several developers in different countries, so getting hold of all the sources for re-compilation and debugging isn't always easy. I agree that it might be a bit sudden to blame the application builder, but I just find it a bit strange that everything seemingly runs fine as a LabVIEW 8.2.1 executable. I will try to do some more debugging and let you know. Cheers X
  9. Dear LAVAers, after having had the Linux version of LabVIEW 8.6 for a while, we decided it was time to move our applications from LabVIEW 8.2.1 to LabVIEW 8.6. Most things went fine, but I got into trouble when trying to port some of the applications making use of shared libraries. There is one library in particular I have been battling with: an implementation of Role Based Access Control (RBAC) done in C++ and used in most of our front ends at work. Unfortunately I can't upload the source of the libraries here, since the whole RBAC project involves several external shared libraries and server dependencies and won't compile without them. What is puzzling me, and led me to believe that there is something wrong with the application builder, is that the shared library call works just fine in the full development environment, but fails once I make an executable out of it. It also works just fine as a VI and as an executable in LabVIEW 7.1, 8.2 and 8.2.1, but not in 8.5 and 8.6 (haven't gotten the 8.6.1 distro for Linux yet). So here is the error message I get:

*** glibc detected *** free (): invalid pointer 0x0832d930 ***

And here is what I have been trying to do to solve this problem:
- Simplified the library call and narrowed the triggering problem down to one function (a class calling a pointer to another class) in the library.
- Tried to gather and install all the latest libraries needed by LabVIEW executables (for a modified Red Hat 5 distro, but with all the requirements for LabVIEW 8.6) // NOT WORKING
- Copied all these libraries into the ./ path of the executable, to ensure that this version was loaded (and set LD_LIBRARY_PATH to ./) // NOT WORKING, STILL CRASHES
- Installed the latest .rpm of glibc (glibc-2.9) (LabVIEW 8.6 for Linux supposedly works with 2.2.4 and later) // NOT WORKING, SAME PROBLEM
- Installed LabVIEW 8.6 through rpm (the main version is installed in a customised way, so I thought we had missed something in the installation process) on a different machine running Fedora Core 10 with all the latest patches (verifying that the problem does not come from the core libraries?) // NOT WORKING
- Tried to use the shared library in applications other than LabVIEW (made a small C wrapper calling the same functions as LabVIEW) // WORKS JUST FINE :(
- Tried to compile and run with LabVIEW 8.2.1 // WORKS JUST FINE
- Tried to do a core dump of the executable and debug it through gdb // NO DEBUG INFORMATION (of any use to me)
- Tried to compile the shared library on Windows and make a LabVIEW 8.6 executable calling the library there // WORKS
- Tried to find memory leaks through valgrind, but valgrind sends an abort signal when calling the library through the LabVIEW executable // OF NO HELP
- Eaten lots of biscuits and drunk too much coffee, realising that 4 days have passed and nothing has happened.

Here is what I'm planning to do next:
- Make another shared library calling the RBAC wrapper library, setting it up so that (hopefully) it will use the system memory space and not go through LabVIEW, only passing the actual data in between (a rough sketch of this idea follows below).
If this shared library also crashed in LabVIEW 8.2 and on Windows, I would probably be tempted to believe that the problem was caused by some bad memory handling in the C++ project (calling delete twice for the same pointer or something of that nature). But seeing that it works in the LabVIEW 8.6 development environment on my Linux box, works in the Windows environment, works when building executables on Windows, and works when building executables in LabVIEW 8.2.1 on Linux... it has to be the application builder??!! (Which is another way of saying that I really don't have any clue at this point why the LV executable fails and the development environment doesn't.) Can any of you bright minds out there shed some light on this, because right now I don't know what to do next. Any tips or links to useful stuff are highly appreciated. Cheers X :headbang: (Unfortunately the upload thingy here prevented me from uploading the .tar.gz of my test VI.)
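For what it's worth, here is a rough sketch of the isolation wrapper mentioned in the plan above (my own illustration; the RBAC header and function names are placeholders, not the real API): the idea is to expose a flat C interface so that all C++ objects live and die inside this intermediate library, and only plain C data crosses the boundary to LabVIEW.

############# rbac_isolation_sketch.cpp #################
// Sketch only: wraps the C++ RBAC login behind a plain C interface.
// "rbac::login" and the header name below are invented placeholders.
#include <cstring>
#include <string>
// #include "rbac_client.h"   // the real RBAC C++ headers would go here

extern "C" int RBAC_Login(const char *application,
                          const char *userName,
                          const char *password,
                          char *tokenOut, int tokenOutLen)
{
    std::string token = "dummy-token";   // placeholder result
    // In the real wrapper this would call the RBAC C++ login and catch any
    // exception so it never propagates into LabVIEW, e.g.:
    // try { token = rbac::login(application, userName, password); }
    // catch (...) { return -1; }
    (void)application; (void)userName; (void)password;

    std::strncpy(tokenOut, token.c_str(), tokenOutLen - 1);
    tokenOut[tokenOutLen - 1] = '\0';    // caller-allocated buffer from LabVIEW
    return 0;
}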
  10. QUOTE (rolfk @ Dec 18 2008, 10:09 AM) Thanks Rolf. This world of Mac is still new to me. After making this post, I managed to make a small Xcode project, creating the .framework "directory" and making use of some simple callbacks. My only "problem" now seems to be how I can re-use my makefile in the Mac Unix environment. I guess this post is better suited to the Mac/Linux forums, but I'll post back what I figure out. X
  11. Dear LAVA users (I'm sorry if this post is more of a C/C++ code porting problem than an actual LabVIEW problem, but...). I just bought myself a brand new Apple MacBook Pro, 15", with OS X Leopard (10.5.x, I think?) installed. After playing around and familiarizing myself with the Mac, OS X and its Unix-like environment, I decided to port some of the LabVIEW code wrappers that we use at work to interface with various things, previously compiled both for Windows in Visual Studio 6 and 8 and for Linux Red Hat 4 and 5 (gcc 3.4.2 and gcc 4.1.2, I believe, but not 100% sure). I did some reading, and it seemed that what is a ".dll" on Windows and a ".so" on Unix/Linux is called ".dylib" on OS X. I installed the Xcode package, LabVIEW 8.6 for Mac, and some other stuff that might come in handy. After modifying my makefiles (basically replacing -shared with -dynamiclib and recompiling with gcc in the terminal) I tried to load the new "myFile.dylib" in LabVIEW. Unfortunately LabVIEW didn't see any of my callbacks, and the dropdown list where you choose the function in the Call Library Node was grayed out or empty. So, not knowing what went wrong, I made a small "cout" callback like this:

############# myCout.cpp #################
#include <iostream>

#define API __attribute__(( __cdecl__ ))

extern "C" void API Cout(char * myCout)
{
    std::cout << myCout << std::endl;
}

################ Makefile #################
CC = g++
# Backup files from various editors ...
BAKS = ,* *~ *.bak *.BAK .es1* .B* %*% .ec1 .ek1 .*~ core a.out *JNL *.lst \\\#*\\\# .nfs* *%
#
CFLAGS = -g -c -I/usr/include -o $@
LDFLAGS =
OBJ =
LDLIBS =
#
all: cout.so
#
cout.so: cout.o
	@-$(RM) $@ $(W)$@
	$(CC) $^ $(LDLIBS) -o $@ -dynamiclib   # only changed this on mac (was -shared on linux)
#
cout.o: cout.cpp
	$(CC) $(CFLAGS) cout.cpp
#
clean:
	$(RM) *.o $(BAKS)

Making the .dylib went fine: no errors and no warnings. But I'm still unable to load this library and use its function in LabVIEW. What am I doing wrong?? Any help would be highly appreciated. X

Update: I realized after some searching that I can't use a "dylib" on the Mac, but have to use something called a ".framework" (bear with me, because this is all new to me), so I hoped all I had to do was replace "-dynamiclib" with "-framework" and name the output xxx.framework, but now it won't compile (it said something about "argument to -Xlinker missing"). Guess it's time for bed and to try with some fresh eyes tomorrow.
  12. QUOTE (Adam Kemp @ Aug 18 2008, 06:48 PM) Thanks for the input. I could change the permissions of the folder containing the top-level VI, but I was hoping to avoid this. I guess in the end this would be the easiest solution. I guess what I was fishing for was whether there is a way to programmatically re-link the VIs (something like a "mass compile light") if I choose to store them in /tmp. X
  13. Dear users of the LAVA forum, I did some brief searching in this (and other) forums, but I couldn't quite find an answer to my problem. I have a non-reentrant, non-template VI which is neither developed nor maintained by me (which basically means I would prefer not to change any of the preferences or properties of this VI). This VI is called by an application I created, where I copy the top-level VI I want to call, and once the called VI is in memory, I delete the clone. This way I can launch multiple instances of the VI without making it reentrant or saving it as a template. My problem is that this only works when I am the same user as the one who created the called VI. As soon as I switch to a different user, I don't have write privileges to that repository, so I can't do the copy. To overcome this I tried to save the clone in /tmp, but there LabVIEW can't find the sub-VIs belonging to the clone, so if I use this technique I would have to re-link the hierarchy every time, or clone the whole hierarchy. So my question is this: is it possible to change the user for the directory where I'm copying the clone, just for this one operation, programmatically and without being root, or do I have to find a way to re-link the hierarchy of the clone? (I guess I could specify the paths of the clone's sub-VIs in the "VI search path" and create a custom init file for this application by adding the -pref "path to init file" token in the startup script, but I was hoping there could be some other way to do this, since this setting, as I understand it, can only be set at startup and not at run time? Which means I would have to restart the application every time the structure of the called VI changes?) Just fishing for solutions here; any input would be helpful! Cheers X
  14. Hi there, great people of the LAVA forum. I was just wondering about a small thing: is it possible to run the Shared Variable Engine under a Linux OS? I know that the NI pages state that the SVE can only be executed under Windows (and Phar Lap), but I'm intrigued by the idea of running it under Linux, so that you don't have to dedicate a Windows server to the SVE in a "Linux only" environment. If it's not possible to run the SVE under Linux natively, could it be possible through a Windows emulator such as Wine? Or is the SVE so heavily integrated with LabVIEW that it would be hard to separate the server out as a standalone application? I just wanted to hear if any of you have tried to play around with the SVE under a Linux environment ^^ Cheers X
  15. QUOTE(MTM @ Jul 3 2007, 05:20 PM) I guess there are many ways to do this, and there are probably a lot more skilled LabVIEW programmers in this forum than me, but I would suggest that you time the calculations you are doing, so that you know how many seconds/milliseconds your calculations take in a "real" run. Then you at least know what sampling frequency/period you cannot exceed. This can easily be done by adding a flat/stacked sequence structure around your calculations and timing the calculation, something like this: http://forums.lavag.org/index.php?act=attach&type=post&id=6299

A good rule of thumb is to have at least 2 or more measurements/samples for every half period to prevent folding (aliasing) of the signal (I think it was Nyquist who formulated that). So if your application at "max" speed doesn't take more time to calculate than the time it takes to acquire at least 2 measurements (I would prefer more points, but this is the absolute minimum, I guess), I would believe you could make an "almost" RT tool.

If you can, I would rather divide the program into two parts: RT and "calculations". The RT task is set to a higher priority than the "calculation" part, and the data is buffered and only passed to the calculation part when the RT task is "idle" or when it is otherwise suitable to do so. This way you don't lose any time on calculations or lose any data (beyond what the digitalization itself loses), and you get the least corrupted data. The most time- and CPU-consuming part I often find to be the charts and visual data updates. Even a small detail such as using square dots instead of round ones (or was it the other way around??) in the charts has an effect on the "calculation" time, so all the graphical handling should be placed outside the RT task. The only problem with this solution is if the amount of data accumulated before "idle time" is really big: since the "calculation" part of your program never handles the data synchronously with your RT task, the data array of the RT task grows every minute. But usually this isn't a problem with today's memory or equipment. I think there are several examples on ni.com's pages and in the RT package itself (sorry, but I don't have LabVIEW RT installed on the machine I'm writing on right now, but I will check) on how to do prioritized loops. I don't really know if this answered your question, but if you could provide some code of what you are doing, it would be easier to (at least try to) point out a solution (and some additional equipment / operating frequency details would help as well). /x
  16. QUOTE(MTM @ Jul 3 2007, 03:47 AM) The sampling frequency varies from DAQ device to DAQ device. This should be available in your device documentation, and you can set the rate in LabVIEW accordingly, provided the network connection between your device and your computer supports the same rate (or, if possible, store the data locally and do the calculations on the measurements after acquiring the data). Where I work, we use several PXIs with DAQ cards to acquire data at frequencies up to 30 MHz; if the frequency is above this, we use FPGA, since the PXI can't handle these speeds in RT. We store the data locally on the PXI, without doing any form of calculation or manipulation of the data, and when the test is done, we store the data in a main repository where users can download and manipulate the data as needed. So: do you need to do the measurements and calculations in RT, or is it possible to buffer/save the data locally and do the calculations afterwards? If buffering/saving is possible, you aren't that dependent on the speed of your calculations on the computer, since you then calculate the outcome from the stored array. /x
  17. There is also a block called "Integration PtByPt" (integration point by point), found under "Signal Processing", which can be used to integrate scalar momentary values (a rough sketch of the idea is below). /x
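Purely as an illustration of what a point-by-point integrator does (my own sketch of the idea, not NI's implementation): it keeps a running sum and updates it with one new scalar sample per call, here using a trapezoidal step with a fixed time step dt.

############# ptbypt_integrator_sketch.cpp #################
#include <cstdio>

// Sketch only: running (point-by-point) trapezoidal integration.
class PointByPointIntegrator {
public:
    explicit PointByPointIntegrator(double dt) : dt_(dt) {}

    // Feed one new scalar sample, get the integral so far.
    double update(double x)
    {
        if (first_)
            first_ = false;
        else
            sum_ += 0.5 * (prev_ + x) * dt_;   // trapezoidal step
        prev_ = x;
        return sum_;
    }

private:
    double dt_;
    double prev_  = 0.0;
    double sum_   = 0.0;
    bool   first_ = true;
};

int main()
{
    PointByPointIntegrator integ(0.001);                    // 1 ms between samples
    for (int i = 0; i < 5; ++i)
        std::printf("integral = %f\n", integ.update(1.0)); // integrating a constant 1.0
    return 0;
}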
  18. QUOTE(67nate @ Jun 29 2007, 09:34 PM) Hi Nate, I think you are mixing up bits and bytes a bit (unless you just typed bit instead of byte??). In any case, to clarify things: your XOR'ed value "2C" = 3243 in hex, as you stated, which is 2 bytes, thus 16 bits (hexadecimal = base 16), and is written to memory like this (pretending this is a 16-bit system, though computers usually use 32/64 bits):

(msb) 0011 0010 0100 0011 (lsb) = 3243 (hex) = 2^13 + 2^12 + 2^9 + 2^6 + 2^1 + 2^0 = 12867 (decimal) = 50 67 (decimal, written as byte pairs)

And another thing: every string character represents a byte, and therefore 8 bits. So when you have the 2 characters "2C" it is actually 2 bytes, 2x8 = 16 bits (I'm guessing there is no NULL terminator in your string since it's in hex?? Gah, I'm starting to confuse myself...). I've provided a picture and a small .vi, trying to sort things out... Hope this poor explanation was somewhat useful... I'm not good at explaining stuff... Cheers X http://forums.lavag.org/index.php?act=attach&type=post&id=6279
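Just to illustrate the string-versus-value distinction above (a minimal example of my own, not from the original thread): the two-character string "2C" occupies two bytes (the ASCII codes 0x32 and 0x43, i.e. 0x3243 = 12867), while the numeric value 0x2C fits in a single byte.

############# bytes_vs_bits_sketch.cpp #################
#include <cstdio>
#include <cstring>

int main()
{
    const char *text = "2C";        // 2 characters = 2 bytes = 16 bits
    unsigned char value = 0x2C;     // the number 0x2C = 44 decimal, 1 byte

    std::printf("strlen(\"2C\") = %zu bytes\n", std::strlen(text));
    std::printf("ASCII codes   : 0x%02X 0x%02X\n",
                (unsigned)(unsigned char)text[0],
                (unsigned)(unsigned char)text[1]);             // 0x32 0x43
    std::printf("value 0x2C    = %u decimal\n", (unsigned)value); // 44
    return 0;
}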
  19. QUOTE(MTM @ Jun 29 2007, 09:28 PM) Do you have an example of what you want to calculate, and of how you tried to do it? I'm just wondering which discrete integration technique you used (Euler, Simpson, Bode, Cramer, etc.), and whether you are giving the discrete integral approximation enough data points to work on (dt vs. array/simulation data size)?
  20. Hi, I just tried to make a small PI (soon to be PID) controller for integer numbers, since there is no floating point available on the NXT. As I understand from the documentation provided with the NXT toolkit, there is a built-in PID controller in the output properties of the NXT? Have any of you tested this controller in LabVIEW? (Fishing for examples here, since I often find these more explanatory than the documentation itself, and I'm a bit lazy.) This is my own version of the PI controller for INT32, but I'm wondering if I should rather use U8 or something similar? This controller multiplies the proportional gain and the integration time by the factor "Scale", so that you keep some of the decimal precision which you would otherwise get from floating point. Then it scales the output back down and saturates it to the proper limits of the actuator, and also prevents windup of the integrated error. Can some of you please give me some feedback on these blocks, and help me figure out if I made any calculation errors converting from double to integer? (By the way, the scale defines the value you have to multiply Kp, Ti and h with; Kp is the gain (as opposed to PB, which is 1/Kp), Ti is the integral time relative to the scale, and h is the step length.) Cheers ^^ X
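A minimal text sketch of the scaled integer PI idea described above (my own illustration, not the NXT block diagram; all numbers are placeholders): Kp and the integral gain are pre-multiplied by "Scale" so fractional gains survive the integer maths, the result is divided back down, saturated to the actuator limits, and the integrator is frozen while the output is saturated (anti-windup).

############# int_pi_sketch.cpp #################
#include <cstdint>
#include <cstdio>

// Sketch only: a scaled integer PI controller. A real implementation would
// also guard against overflow of kpScaled * error for large errors/gains.
struct IntPI {
    int32_t kpScaled;   // Kp * Scale
    int32_t kiScaled;   // (Kp * h / Ti) * Scale
    int32_t scale;      // e.g. 1000
    int32_t uMin, uMax; // actuator limits
    int32_t integral;   // accumulated error

    int32_t update(int32_t setpoint, int32_t measurement)
    {
        const int32_t error = setpoint - measurement;
        const int32_t u = (kpScaled * error + kiScaled * integral) / scale;

        if (u > uMax) return uMax;   // saturated high: don't integrate (anti-windup)
        if (u < uMin) return uMin;   // saturated low: don't integrate
        integral += error;           // only wind the integrator when unsaturated
        return u;
    }
};

int main()
{
    // Placeholder tuning: Kp = 2.5 and Kp*h/Ti = 0.1, both scaled by 1000.
    IntPI pi{2500, 100, 1000, -100, 100, 0};
    int32_t position = 0;
    for (int i = 0; i < 5; ++i) {
        int32_t cmd = pi.update(50, position);   // drive towards setpoint 50
        position += cmd / 10;                    // toy "plant", just for the demo
        std::printf("cmd=%d position=%d\n", cmd, position);
    }
    return 0;
}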
  21. QUOTE(Jeff Plotzke @ Feb 13 2007, 03:20 AM) Hi Jeff, and thanks for your reply. I'll look into using plain TCP/IP, but I thought the shared variables would perform better than TCP/IP since they use NI-PSP, or "refined" UDP, which should have a smaller header than TCP/IP. Cheers
  22. Hello all! I recently started a project where I combine LabVIEW 8.2 and the new release of LabWindows/CVI 8.x. The LabWindows/CVI package includes a library which enables communication between the two through shared variables (or "network variables", as they are called in LabWindows). My problem is the limited transfer speed I get when using the shared variables. I did several tests where I created a send/receive loop in LabVIEW containing arrays from 0 to 10 MB in size. I sent these arrays from LabVIEW, read them in LabWindows, and wrote the values to another variable address, which I then read again in LabVIEW. Doing this, I got an almost linear increase in the transfer speed from a few kB/s up to approx. 3 MB/s (24 Mb/s), and even when I tried larger arrays with 5242880 elements (40 MB), the result was always the same: 3 MB/s transfer speed. I also tried to add more shared variable connections, the idea being to see whether the bandwidth was limited per variable, and thus gain more bandwidth simply by adding more connections. But the result stayed the same: 3.xx MB/s seems to be the transfer limit. I tried several different approaches, but the result is always the same: 3 MB/s transfer rate. This test was performed on a 100 Mb/s connection, and I guess a Gigabit connection would give 30 MB/s and a 10 Mb/s connection would give 0.3 MB/s. So my question is: is there any way to increase the transfer speed of the shared variables, or is this a fixed limit which can't be dealt with? All help or suggestions are appreciated. Oh, and if it is of any use, I can upload the project and the LabWindows code, but I'm really only after some answers on why the shared variables seem to be limited to 3 MB/s (24 Mb/s) on a 100 Mb/s (12.5 MB/s) connection.