Everything posted by KoBe

  1. Hi, I would like to have a stand-alone installer running on a host PC which is able to install a certain set of drivers on a PXI (like in MAX). The reason: we ship the same type of PXI together with our test benches to our customers, and when we want to update the rtexe application, the drivers on the earlier delivered PXIs are too old to be compatible with the newest software release. How can that be solved?

     I heard about images of PXI installations. I don't know if an image is a good idea. The problem is that we would need to have at least one PXI of each kind in house to build the image for all PXIs out there. That's not really suitable for us. If I find a bug and want to send fixed software to the customer, or if I want to enable new software features on the existing PXIs, I would prefer to be able to build an rtexe installer which includes all needed driver versions. It should be more or less simple to use right out of the LV project, like it can be done with the PC application installer (including Run-Time Engine and additional selected files like dynamically loaded VIs).

     Does anyone have an idea or some suggestions on how to install drivers programmatically on a PXI remotely? In general I have no permission to enter the customer's network from our site. They also do not have their own development systems, just the Run-Time Engine. I'm using LV2012 + RT on Windows 7 and a PXI RT8100. Thanks in advance. KoBe
  2. Actually I'm using separation of compiled code; I'll give it a try and delete my object cache before restarting my program. Where would you use the "Always copy"?
  3. Hi, I have a strange problem with one of my variables. I use that variable to store data which is displayed on a waveform graph. One VI inserts new data into the variable and writes the changed value back. Everything works fine most of the time, but sometimes certain values change to 0. My target is a Win-7 PC.

     The data type is:
     Chart_XY_Signals (1-D array of)
       XY_Signal (cluster of 2 elements)
         Name (string)
         XY_Data (cluster of 2 elements)
           X (1-D array of) X-Value (SGL)
           Y (1-D array of) Y-Value (SGL)

     A new value pair consists of a monotonically increasing X-Value (1, 2, 3, ...) and a data value for Y-Value. An additional piece of information states the signal in which the data has to be stored (index into Chart_XY_Signals). I use the following steps in a loop, which gets new values from a queue:
     1) Read Variable
     - loop for all new values coming in -
     2) Index Array element of Chart_XY_Signals
     3) Unbundle X and Y of XY_Data of XY_Signal
     4) Append X-Value and Y-Value to the arrays X and Y
     5) Replace new X and Y with Bundle by Name in XY_Signal
     6) Replace new XY_Signal in the Chart_XY_Signals array
     - end of loop -
     7) Write Variable

     The error is the following: sometimes the data structure gets corrupted. Assume 4000 value pairs for each signal (about 40 signals) are stored in that structure; it happens that some old Y-Value or X-Value is unintentionally changed to 0. In my waveform graph this makes horizontal lines to X=0 or vertical lines to Y=0, which looks ugly and must be resolved before delivery of the project.

     I checked my code and debugged it and came to the conclusion that the error cannot be a programming error, because the first time a 0 value is detected is after step 3)!!!! That means somewhere in Read / Index Array / Unbundle the data is corrupted. Writing new data at step 7) makes the old value lost forever. The other possibility: the error occurs somewhere in Bundle / Replace Array Element / Write. I thought these parts would never fail and therefore I never cross-checked the whole bundled data before the Write. The variable itself can't be the problem; I tried: 1) Global Variable, 2) Data Value Reference, 3) Functional Global Variable (FGV). For index/replace and bundle/unbundle I used both nested In Place Element structures and the normal style. I can't understand where the problem comes from. Has anyone experienced similar problems, or does anyone have an idea why I'm running into that s***? KoBe
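     For reference, a minimal Python sketch of the read-modify-write cycle described above (names mirror the post; the shared variable is simulated by a plain list, so this is an analogy, not LabVIEW code):

         # Each signal: a name plus growing X/Y sample arrays (~40 signals).
         chart_xy_signals = [{"Name": f"Sig{i}", "X": [], "Y": []}
                             for i in range(40)]

         def append_sample(signals, index, x, y):
             """Steps 2-6: index the signal, unbundle, append, replace."""
             sig = signals[index]      # 2) Index Array
             sig["X"].append(x)        # 3)/4) Unbundle and append X-Value
             sig["Y"].append(y)        # 3)/4) Unbundle and append Y-Value
             signals[index] = sig      # 5)/6) Bundle and Replace Element

         append_sample(chart_xy_signals, 0, 1.0, 0.5)

         # Steps 1) and 7) wrap this loop with a read of the WHOLE structure
         # and a write back. Note: if any second writer ever performs the same
         # read-modify-write concurrently, values written in between are
         # silently overwritten -- a classic hazard of this pattern.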
  4. Hi, I have a big problem: in my project I need to evaluate the difference between the RT timestamp and the Read XNET CAN Frame timestamp to sort my incoming data. System: PXIe-8100 RT controller, PXIe-1071 chassis, PXI-8512 CAN card.

     Unfortunately it turned out that these timestamps are not synchronized and have a different clock speed. After some time it looks as if I receive the CAN messages before I trigger the external hardware to send me a CAN frame. I asked the support whether connecting the CAN card's master timebase to the Clk10 of my PXI would synchronize both clocks, but I was told that the RT timestamp is software-based and cannot be synchronized with any signal of the PXI backplane. I really hope that this is not the whole truth; maybe there is some kind of workaround to synchronize the RT timestamp with a fixed PXI clock. If there were a fixed offset between the RT timestamp and the XNET timestamp I could correct it in software, but the varying offset is currently a serious problem for me. Hopefully someone has a great idea!
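     Since the offset varies (drift), a pure constant correction won't do. One possible software workaround, sketched in Python and assuming you can occasionally capture paired readings of both clocks (e.g. an RT timestamp taken right around an XNET frame whose bus time is known; the numbers below are made up for illustration), is a linear fit that estimates drift and offset together:

         import numpy as np

         # Hypothetical paired readings of both clocks, in seconds.
         rt   = np.array([0.0, 10.0, 20.0, 30.0])
         xnet = np.array([0.5, 10.502, 20.504, 30.506])

         # Model xnet ~ a*rt + b: a captures the rate difference (drift),
         # b the constant offset.
         a, b = np.polyfit(rt, xnet, 1)

         def rt_to_xnet(t_rt):
             """Map an RT timestamp onto the XNET timebase."""
             return a * t_rt + b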
  5. Hi, I've been using source control for a few months, and for some weeks now I'm not the only developer. Earlier I had all project files and library files locked by me to avoid those "recompile save requests". But now I have locked only a portion of the code, and it happens more often to me and my colleague that we lose some developed code due to "Don't save", because sometimes we think we are discarding only recompiled code.

     I read many threads on "separate compiled code from source" and I was wondering whether LV2011 runs stable with that. My NI support is too busy at the moment to give me a clear answer. Could anyone please answer my question "Should I use this feature in LV2011?" just with "Of course", without a following "but ..."?

     Our libraries have been developed from LV2009 on; we did not use any earlier version. Does anyone have experience with LabVIEW Real-Time, real-time targets and separated compiled code? How does that work? Any issues? From the programming-style point of view we use neither object-oriented programming nor dynamically called VIs, just plain LV VI/SubVI and control typedefs.

     Recompiles happen to us often just by opening a VI in our project under the HOST and RT targets, which use the same SubVI placed in our user.lib. Opening the calling VI of our user.lib VI once for the HOST and then another VI calling the same user.lib VI on the RT causes a recompile of the user.lib VI. That is really a problem, because in some cases I'll be asked to save 50 or more VIs just because of opening the MainRT after closing the MainHOST, due to this recompile issue. Any ideas? Any advice? Is the only way to resolve this to separate compiled code from source?
  6. This short instruction already answers all my questions. Thank you! Your picture made me understand the short instruction from asbo. Thank you!
  7. Hi people, I have quite a big project and therefore I would like to make the block diagram of my MainVI more readable. There are several parallel tasks packed into SubVIs, which looks really pretty. The last challenge for me is the Event Structure of the MainVI. With this Event Structure I react to user input on the MainVI controls and indicators. At the moment I haven't implemented any user events. Now I'm searching for a possibility to move the Event Structure of the MainVI into a SubVI to make it more compact as well. Is there a possibility to do that? In fact this would mean a SubVI must be able to handle events fired by its MainVI. Can anyone help me please? Thanks in advance!
  8. Hi, I've written a program which saves some data in TDMS files. One part of the data is written in interleaved mode; the other data is written in decimated mode to another file. With Excel I can import the decimated data file without any problems, but the interleaved data file crashes Excel. Although the file is only around 50 kB, according to the Windows Task Manager Excel reserves more than 1 GB of RAM within a second and then crashes. Excel 2007 with TDM Importer 3.2.0.18, LabVIEW Professional Development 10.0 32-bit. It looks to me as if Excel couldn't find the end of the file. The LabVIEW TDMS Viewer has no problem with the file and shows the expected data content. Excel throws this error window: "TDM Importer: USI encountered an exception: Unhandled Exception in Initialize". I also tried deleting the *.tdms_index file and defragmenting the file before importing the TDMS into Excel, without success. At the moment I'm installing TDM Importer 3.2.2.0; maybe that works better. If anyone has had the same or a different problem, please let me know your solution. You'll find attached the interleaved TDMS file and the error log from Excel.
  9. Hi, thanks a lot for your quick replies. I will check the point with the conditional probes. The Pause button was definitely not the problem. In my case it also has nothing to do with the IPE structure. I'll inform you when I have news.
  10. Hi people, I found a strange behavior in some of my VIs during debugging: sometimes a program always stops at the same point of a VI or SubVI, even though there is no breakpoint. It behaves like a breakpoint, but there is none. I discovered that it happens more often when I have a Probe Watch Window open; the probe may even sit in a completely different location in the VI block diagram, and the VI still stops at the same nonexistent "breakpoint". When I close the Probe Watch Window, the program (e.g. a loop) no longer stops at the mentioned point. My question: how can I resolve this strange problem? Is this a feature of LabVIEW that I just don't know how to handle? I'm using LV 10.0 32-bit Professional Development Environment.
  11. Hi people, I'm working with LabVIEW 2010. Part of my project is CAN communication between a PC and some custom hardware. I'm using CANUSB with the delivered DLL library in LabVIEW. For test purposes only, I connect a second CANUSB device (running on the same PC) to the bus. With the WGSoft CAN Monitor Pro I should be able to define message triggers to which it has to answer.

     Example: sending with LabVIEW via CANUSB1: ID 0x385, message 0x01, length 1. Message trigger in WGSoft: ID 0x385, message 0x01 (00 00 00 00 00 00 00), mask 0x 00 00 00 00 00 00 00 00 (????), length 1. CANUSB2 should then respond to the message with ID 0x065, message 0x 08 98 09 60 0A 28 00 (00), length 7.

     Problem description: CANUSB2 responds only the first time to a request by CANUSB1. The WGSoft monitor shows first:
     ID: 0x387 EXT: 0 RTR: 0 DLC: 1 MSG: 0x 01 00 00 00 00 00 00 00
     ID: 0x065 EXT: 0 RTR: 0 DLC: 7 MSG: 0x 08 98 09 60 0A 28 00 00
     and the second time:
     ID: 0x387 EXT: 0 RTR: 0 DLC: 1 MSG: 0x 01 98 00 00 00 00 00 00
     Strange that the monitor program shows 0x 01 98 ...; it seems like the second byte from CANUSB1 gets overlaid with the second byte of CANUSB2.

     Does anyone have an idea how to fix this problem? I also often get an arbitration error; a constant red LED on the CANUSB appears, and I don't know how to clear this error by software or how to prevent it. Does anyone know whether I should set the RTR in my request to 0x01, even though the response has a different ID? Every second time I connect to CANUSB via LabVIEW, LabVIEW gets stuck somewhere in the DLL open call. I leave szID empty, because it doesn't work if I try to search for a certain CANUSB device by its serial number. Any idea why LabVIEW gets stuck so often with this CANUSB driver? I know, lots of text, but if you have an answer to even one of all these questions I would be really happy to know it and to share it with LAVA. Thanks
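     If the trigger fails on the second request, the mask semantics may matter. Here is a small Python sketch of how a CAN acceptance mask is commonly applied (an assumption about WGSoft's convention, where a set bit in the mask means "this bit must match"), using the bytes logged above:

         def matches(data: bytes, trigger: bytes, mask: bytes) -> bool:
             """True if every bit set in mask agrees between data and trigger."""
             return all((d & m) == (t & m)
                        for d, t, m in zip(data, trigger, mask))

         rx      = bytes([0x01, 0x98, 0, 0, 0, 0, 0, 0])  # 2nd request as logged
         trigger = bytes([0x01, 0x00, 0, 0, 0, 0, 0, 0])
         full    = bytes([0xFF] * 8)                       # all bytes must match
         first   = bytes([0xFF] + [0x00] * 7)              # compare 1st byte only

         print(matches(rx, trigger, full))   # False: stray 0x98 blocks the trigger
         print(matches(rx, trigger, first))  # True: masked-out bytes are ignored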
  12. Thanks a lot for your VI. I already programmed something else: I'm opening the queue in the loop, previewing an element and then closing it again with force destroy = false. I haven't yet had the chance to test the code. What happens if I close a queue reference with force destroy = false? Doesn't it mean that at least one reference remains open somewhere? And a final destruction is done with force destroy = true?
  13. At the moment I'm "offline" / have no possibility to access the cRIO to try your suggestions. Nevertheless, thank you very much! In the meantime I found exactly what I was searching for: an official NI statement about memory leakage with queues. When I'm online again I'll try to fix that leakage as described below, and then I'll let you know if it works. http://zone.ni.com/reference/en-XX/help/371361E-01/glang/create_queue/

     Obtain Queue Details: Use named queues to pass data between two sections of a block diagram or between two VIs in the same application instance. If you do not wire name, the function creates a new, unnamed queue reference. If you wire name, the function searches for an existing queue with the same name and returns a new reference to the existing queue. If a queue with the same name does not already exist and create if not found? is TRUE, the function creates a new, named queue reference. If you use the Obtain Queue function to return a reference to a named queue inside a loop, LabVIEW creates a new reference to the named queue each time the loop iterates. If you use Obtain Queue in a tight loop, LabVIEW slowly increases how much memory it uses because each new reference uses an additional four bytes. These bytes are released automatically when the VI stops running. However, in a long-running application it may appear as if LabVIEW is leaking memory since the memory usage keeps increasing. To prevent this unintended memory allocation, use the Release Queue function in the loop to release the queue reference for each iteration. This function might return error codes 1, 2, 1094, 1100, 1491, or 1548.
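     A Python analogy of the pattern the help text describes (purely illustrative; LabVIEW's reference table is internal):

         refs = {}       # stands in for LabVIEW's internal reference table
         next_ref = 0

         def obtain_queue(name: str) -> int:
             """Every call allocates a NEW reference to the named queue."""
             global next_ref
             next_ref += 1
             refs[next_ref] = name   # ~4 bytes held per unreleased reference
             return next_ref

         def release_queue(ref: int) -> None:
             """Releases this one reference, not the queue itself."""
             refs.pop(ref)

         # Fixed pattern: releasing inside the loop keeps memory constant.
         for _ in range(1_000_000):
             r = obtain_queue("data")
             release_queue(r)
         print(len(refs))   # 0 -> no growth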
  14. Please use the contact information in the attachment.
  15. Thanks Tim! You're right, I'll insert Close FIFO. If the program worked without problems, I would reach that point only after 2 years of running the loop with the Meetbox FIFO, which means the end of the data logging period, and in that case it doesn't matter if the program crashes. But for correctness I'll insert everything. Nevertheless, I don't know why my code runs indefinitely in debug mode but on the other hand causes RAM overflow when running as a built rtexe...
  16. True, true, now the CPU load is reduced, but the memory issue is still not solved...
  17. Maybe I also ran into this known issue: http://zone.ni.com/devzone/cda/tut/p/id/11014#86-reademptyfifo

     "Reading Empty Target to Host DMA FIFO with Timeout Set to Zero Gradually Starves CPU in built LabVIEW RT executables on cRIO targets. In built LabVIEW RT applications on cRIO, if a Target to Host DMA FIFO read executes with a timeout of zero and the FIFO is empty, a processor leak occurs that increases the CPU usage on the controller. Workaround: read zero elements to find elements remaining, instead of using a zero timeout. Date added: 01/15/2009"

     I have to check that now...
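     The documented workaround, sketched in Python-flavored pseudocode. Here fifo.read is a hypothetical stand-in for the DMA FIFO read method, assumed to return a (data, elements_remaining) pair; the point is the zero-element read pattern:

         def poll_fifo(fifo):
             # A zero-element read only reports how many elements remain,
             # avoiding the problematic zero-timeout read on an empty FIFO.
             _, remaining = fifo.read(0)
             if remaining > 0:
                 data, _ = fifo.read(remaining)
                 return data
             return []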
  18. The whole library behind the project is huge... I would prefer not to upload it. Now I use preallocated memory and I replace elements in the existing array instead of building a new array. That also works fine in debug mode, as described before, but it does not work if I build an rtexe.
  19. It's getting weird!!!! VI in debug mode: used memory remains constant. VI in rtexe mode: used memory constantly increases until it crashes. I think an NI Application Engineer is needed who knows what the difference in memory allocation or usage could be between debug mode and rtexe mode...
  20. Hi people, it's me again. Could it be possible that "Build Array" leaves some overhead behind in RAM, even though the memory should be deallocated by the RTOS? How can I force deallocation in a VI which I never leave (the main VI)? Can it help to wrap each "Build Array" in a SubVI with a "Deallocate Memory" inside?

     I know that it is better to preallocate all the memory first and then insert the new values into the existing array. That works fine for fixed-size data like the single array I'm using. But what would happen if I were to preallocate an array of 6 error clusters? The cluster can't be of fixed size because of the string. Or is it better to do that for the error clusters as well, to be sure no new memory is allocated, at least in the case that there is no error and all error strings remain "", which means constant size most of the time?

     ..... I tried to use only preallocated arrays and it seems to work... I have to run some more hours, days, weeks, months of tests, then I will let you know :-) Bye Kobe
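     The two allocation patterns in question, sketched in Python with NumPy as an analogy (Build Array that grows per iteration versus Replace Array Subset on a preallocated buffer):

         import numpy as np

         N = 100_000
         samples = np.random.rand(N)

         # Growing pattern (Build Array analogy): reallocates and copies
         # on every append, which can fragment memory over long runs.
         grown = np.empty(0)
         for x in samples[:10]:
             grown = np.append(grown, x)

         # Preallocated pattern (Replace Array Subset analogy): the memory
         # footprint stays constant for the whole run.
         buf = np.zeros(N)
         for i, x in enumerate(samples):
             buf[i] = x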
  21. Hi people, I'm using: 1x cRIO-9073 with 1x NI-9203 and 1x NI-9205, 1x WSN-9791 gateway with 3x WSN-3202, 1x M-Bus Modbus converter connected to the RS232 of the cRIO, LabVIEW 8.6.1 + FPGA + Real-Time, NI-RIO 3.4.0 without Scan Engine.

     My system runs as a datalogger. AC mains are measured at 10 kHz and the mean of 2 seconds of current and voltage is stored. All other devices are sampled every 2 seconds. A further loop stores that data in an array and after 1 minute it takes the mean for each 10 seconds (5 samples) and stores the result in a TDMS file.

     The problem is that the application starts with a memory occupation of about 78.6% (viewed in the System Manager) and with each second it increases slightly, by 0.001% or even less. BUT that's enough that after one or two hours the cRIO runs out of memory and gets stuck. This device is in a remote location and should run for 2 years; I cannot go there every 2 hours to reset it manually, because it does not respond remotely.

     In the screenshot you see the main VI of the cRIO. First of all I have some init stuff: a 1-element cluster queue which is used like a notifier (single writer => lossy enqueue, multiple readers => preview) and a 10-element string queue which drives the overall init and control finite state machine. This state machine, after init, toggles between Idle and Remote with a period of 2 seconds to read the values of functional global variables which I'm using to control the system remotely. (Not really needed much, because it's a "stupid" datalogger application which does not need any user interaction, apart from freeing disk space by periodically downloading the TDMS data.)

     The second big loop in the upper part includes 2 timed loops and one while loop for acquisition and data storage:
     1) A timed loop with 200 ms period reads the FPGA DMA and processes the data. Once a package of 2048 values is in memory, a 50 Hz single tone is searched for to measure the AC mains frequency and the RMS voltage/current. This processed data is passed into a functional global variable (FGV).
     2) A timed loop with variable period (in this case 2 seconds) reads the WSN data, the AC data from the FGV, Modbus and the analog inputs of the cRIO, and inserts the data array into an RT FIFO.
     3) A while loop reads the RT FIFO and waits 500 ms to free the CPU if no further element is available. After a variable time (60 s) all collected data is processed and then stored in a TDMS file. The TDMS reference is opened once and, as long as the system doesn't lose power, it is kept in an FGV; only after a restart is the reference reopened.

     And somewhere in this code the cRIO is eating my free memory piece by piece :-(. Each SubVI already contains "Deallocate Memory", but it didn't improve the system. I'm really desperate. Can someone see from the screenshot whether I'm using a programming structure or method with known memory issues? I have no idea... It also happens with another project, where I just read 5 analog inputs with a sampling period of 2 s; there the cRIO is stuck again after 20 days, meaning memory is also full. Could the problem also come from the VxWorks RTOS behind LabVIEW?

     I don't know where the problem comes from and have already tested so many things... The problem is that it takes hours or days to reproduce the error, and that's a real problem, because I can never be sure that the system won't run out of RAM after some weeks.

     Thanks for all suggestions you can give me; I'm really looking forward to every single reply, because otherwise my whole pipeline full of projects won't be successful. Ciao Kobe
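     For clarity, the decimation step from loop 3) as a small Python sketch (one minute of 2 s samples, averaged in groups of 5 to one value per 10 s; NumPy stands in for the LabVIEW array operations, and the channel count is a made-up example):

         import numpy as np

         def decimate_minute(samples_2s: np.ndarray) -> np.ndarray:
             """samples_2s: shape (30, n_channels), one row per 2 s sample.
             Returns shape (6, n_channels): one mean per 10 s interval."""
             return samples_2s.reshape(6, 5, -1).mean(axis=1)

         minute = np.random.rand(30, 12)        # e.g. 12 logged channels
         print(decimate_minute(minute).shape)   # (6, 12)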
  22. Hi people, I'm using my 9205 together with a cRIO-9073 and LabVIEW 8.6.1 with the RT + FPGA modules. I have two similar projects; both should sample a different number of RSE inputs at 10 kHz, reading calibrated values with the FPGA interface (not the Scan Engine). One project needs 10 inputs, the other needs 14. With 10 inputs everything works fine: supposing an aggregate sampling rate of 250 kHz, I can sample 10 channels at 10 kHz each (250 kHz / 10 = 25 kHz >= 10 kHz). Even 125 kHz would be enough (125 kHz / 10 = 12.5 kHz >= 10 kHz). Unfortunately the maximum sampling rate with 14 inputs seems to be only around 8.9 kHz!!! That would mean the NI-9205 is sampling at only 125 kHz instead of 250 kHz, also in RSE mode (125 kHz / 14 = 8.9 kHz < 10 kHz!!!!!). Why is that? How can I reach the 250 kHz for all 32 channels stated as the maximum sampling rate on the official NI-9205 data sheet? Can it be that the NI-9205 uses two 16-channel ADCs at 125 kHz each? That would mean I have to measure at most 12 channels per multiplexed ADC to maintain a sampling rate >= 10 kHz per channel. Any information about that? If anyone could give me further information on the maximum sampling rate in the different operating modes (RSE, NRSE, differential) with different numbers of channels, I would be very happy and grateful. Thanks. Bye KoBe
  23. Hi people, I'm using 8.6.1. When I build my cRIO main application, I get the following error:

     Error copying files.
     Source: C:\Program Files\National Instruments\LabVIEW 8.6\vi.lib\real-time\rtutility.llb\FPC pad/strip string to size.vi
     Destination: C:\LabView_Builds\Main\c\ni-rt\startup\data\FPC pad/strip string to size.vi
     Invoke Node in AB_Source_VI.lvclass:Copy_SourceItem.vi->AB_Build.lvclass:Copy_Files.vi->AB_Application.lvclass:Copy_Files.vi->AB_Build.lvclass:Build.vi->AB_RTEXE.lvclass:Build.vi->AB_Build.lvclass:Build_from_Wizard.vi->AB_UI_FRAMEWORK.vi->AB_Item_OnDoProperties.vi->AB_Item_OnDoProperties.vi.ProxyCaller <APPEND> Method Name: <b>Set Path</b>

     I have found no bugfix so far. Therefore I would like to remove the "/" from the filename of "rtutility.llb\FPC pad/strip string to size.vi" to make the filename compatible. Problem: "rtutility.llb" and "NI_Real-Time Libraries.lvlib" cannot be changed; they're password-locked by NI. So I tried to replace "pad/strip" with "pad strip" using Notepad in "rtutility.llb" and also in "NI_Real-Time Libraries.lvlib". Problem: I couldn't open "NI_Real-Time Libraries.lvlib" anymore (maybe a corrupted checksum due to the Notepad changes), and the changed "FPC pad strip string to size.vi" is not executable, because it claims to be part of a library ("NI_Real-Time Libraries.lvlib"), but no library claims to own "FPC pad/strip string to size.vi". So I have no idea. PLEASE, can anyone help me? If there is an existing bugfix that I just didn't find, please send me a link. Thanks!!!
  24. That works; I have a VI which takes a cluster (input is a variant) and searches for its contained data type in the flattened data. Output is then an array of clusters. If I were able to typecast / convert or extract the data cluster (the attributes) of my class into a variant (containing this cluster), I could use my VI and... finished. But I don't know how to get the whole attribute cluster. If I look at the class as flattened data, I can't find the type descriptor of the cluster, just the class name and something else containing the class name. The data string seems to contain all the data, but I cannot extract it in general without a meaningful type descriptor. Let's try it the other way round: how could I get the needed type descriptor out of the class name? Would that be possible, maybe? Bye
  25. Hi people, I would like to know whether it is possible to read all attributes of a LabVIEW class by getting an array of variants. I would like to have such a read method because I want to use all attributes of my class as the description of a TDMS file: the attribute's type descriptor name will be the property name in my TDMS and the value will be the TDMS property value. Instead of calling all read methods for each single attribute of my class, I would like to have a generic VI with these properties: input is a variant, so I can wire any class to that input; output is an array of variants containing all attributes of the input class. This generic VI will then be a SubVI of the "Read_All" method of each class, to be able to access private attributes. Does anyone have an idea how to "typecast" or "Variant To Data" my input variant to a generic LabVIEW class reference, where I can access the attributes and their values to put them into my array of variants? Bye. Corrected version of my VI attached, I'm sorry for that.
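     For comparison, the desired generic "Read_All" behavior expressed in Python, where reflection makes it trivial (an analogy only; LabVIEW classes don't expose a public equivalent of vars() for private data, and the class below is hypothetical):

         class Measurement:
             """Hypothetical class whose attributes become TDMS properties."""
             def __init__(self):
                 self.sensor = "PT100"
                 self.rate_hz = 10.0

         # Attribute name -> TDMS property name, value -> property value.
         attrs = list(vars(Measurement()).items())
         print(attrs)   # [('sensor', 'PT100'), ('rate_hz', 10.0)]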