Everything posted by Götz Becker

  1. An (old) way to reset an NI-CAN board can be found at: ncReset.vi. It looks like it still works; we are currently using it in an RT app. With NI-CAN in fast loops we had timing problems too (1 ms at first, now we are down to 5 ms, just enough to satisfy an alive counter for the UUT). It seems that the NI-CAN boards generally spend a lot of time on the system bus (no DMA) and are therefore slow. Have you tried the RT Execution Trace Toolkit to see where your time is lost? It adds some major overhead while tracing, but then you'll have a chance of narrowing the timing problems down. We found (or rather believe/hope) for our application that the CAN Frame API calls use the PAL thread, which itself does the PCI bus communication; while these are running, any other bus access (e.g. networking) can cause trouble for the loop. Have you tried setting your PXI controller to Ethernet polling mode (it will probably decrease transfer speed, but maybe that's enough for your app)? I have seen some really horrible traces in which the Ethernet interrupt handling thread preempted high-priority closed-loop control loops and caused timing problems. Have you tried lowering the rate at which you write to the shared variables (process inside the for loops and write a larger chunk into the variable later; see the sketch below)? Queues on RT should be fast enough!?! Have you tried them with fixed size and preallocation at creation? Did they run full and then cause delays when enqueueing? Just my 5 cents.
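     Here is a rough sketch of the batching idea in Python (not LabVIEW, just to illustrate the concept; all names and numbers are made up): collect samples locally inside the loop and publish one larger chunk per period instead of writing every single sample to the shared variable.

```python
# Batching sketch: one expensive write per chunk instead of per sample.
BATCH_SIZE = 50          # hypothetical: publish once per 50 loop iterations
batch = []

def publish_chunk(chunk):
    """Placeholder for the expensive write (e.g. a shared-variable update)."""
    print(f"published {len(chunk)} samples")

def loop_iteration(sample):
    # cheap per-iteration work: just append to a local buffer
    batch.append(sample)
    if len(batch) >= BATCH_SIZE:
        publish_chunk(batch.copy())   # one big write per batch
        batch.clear()

for i in range(200):
    loop_iteration(i)
```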
  2. QUOTE (c.LEyen @ Nov 5 2008, 08:14 PM) A good overview of the exam topics can be found in "How Can I Prepare for the Certified LabVIEW Architect (CLA) Exam?" (http://zone.ni.com/devzone/cda/tut/p/id/5892). The examples in the LV Advanced I manuals are good for quickly getting the concepts you can use in the exam. One thing I would miss when only reading the book is the discussion of various advanced LV programming topics that usually happens during the three days of the course.
  3. Hi, thanks for the hint about SQLite.net and your example. This really looks nice and I hope to get some free time to test its integration as a replacement for a current DB component. :thumbup: Götz
  4. QUOTE (ohiofudu @ Oct 7 2008, 09:36 PM) I'll be there, but only Thursday morning for the R&D keynote. Greetings, Götz
  5. QUOTE (Neville D @ Oct 7 2008, 06:07 PM) I guess it's a mixture of both. When I disable SCC in the project, loading of the main.vi is fast, but building is still slow while processing the vi.lib stuff. Network sniffing shows that as soon as I start the build, traffic to the Perforce server/proxy begins. When I disable SCC in the LV options, building is fast and there is no traffic to Perforce. I guess I'll have to test with separate LabVIEW.exes. Thanks for all your suggestions. Greetings, Götz
  6. QUOTE (Neville D @ Oct 6 2008, 06:27 PM) Perforce is currently set for all our projects and I fear that mixing SCCs would do more harm than good. Currently we are testing a Perforce proxy installation to speed things up... it "feels" a little faster (but processing of NI_AALBase.lvlib alone takes about 1-2 minutes). Since we code on the same project with 4 people sitting in 2 different places, I can't just check out everything (something I normally do when working alone on a piece of software). Getting rid of the conditional disable structures would mean a major rewrite and reorganization of the codebase, another thing we can't risk at the moment. I guess I'll try to talk to NI about that at the VIP 2008 this week in Germany.
  7. Thx for your suggestions. My current guess is that the background SCC (Perforce) checks are taking very long (sometimes it looks like some file-state caching is present). Removal of unused library members also takes time (vi.lib contents; the project has about 30 lvlibs). NevilleD: Are you using SCC? If so, which one? Your questions: 1: no FPGA code. 2: compiling for one PXI target only, but many of the VIs shared between the Windows host app and RT have conditional disable symbols inside. 3: enough memory present (2 GB), but most of the time LV is below 10% CPU usage; only at the end is one CPU core completely used by LV.
  8. QUOTE (KarstenDallmeyer @ Oct 1 2008, 09:40 AM) Hi, one idea I have is that this is an NTP-like server for FieldPoint devices (http://en.wikipedia.org/wiki/Network_Time_Protocol). But I don't really know for sure :question: Götz
  9. Hi, I just filed a feature request for background compilation for LV RT: --- Compilation of an rtexe for large applications can take very long (especially with SCC configured) and is modal. This causes waits of about 5-10 minutes between the start of a compilation and the deployment of the startup exe. It would be _very_ helpful if that could be done in the background. Maybe this could work like FPGA compiling: copy the VI hierarchy to a temp directory and start background compiling from there, ignoring SCC for this directory as well. --- How do you handle long compile times? I suppose that our times come from all those background SCC checks and the analysis of unused lvlib members, but since we can't change that, we are stuck with very long downtime during program/build/test cycles. Greetings Götz
  10. Hi, I recently learned that with LabVIEW Real-Time installed it is possible to use the RT-FIFO primitives in VIs running with Windows as the target system. Are there any pitfalls when using RT-FIFOs in a Windows application? A quick exe build of a VI with some RT-FIFO functions ran without problems on WinXP / LV 8.6. My use case is that we have a current project developed for RT in which the FIFOs are used extensively (mostly for low-level communication in various HW driver modules). We now have a new requirement for this software to run as a stripped-down version (e.g. without closed-loop control) on Windows targets. It would be nice if we could keep most of the code, including the parts using RT-FIFOs. My first thought was to replace all RT-FIFOs with their queue counterparts inside conditional disable structures ( :thumbup: for the new lossy enqueue element!), but then all references would have to be replaced too, a road I don't want to go down if I have a chance to avoid it (the alternative I have in mind is sketched below). Has anyone tried something like that? Greetings from a warm and sunny Munich, Götz
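     To make the alternative clearer, here is a rough sketch in Python (not LabVIEW; all names are hypothetical) of what I mean by hiding the transport choice behind one thin wrapper: the call sites never see whether an RT-FIFO or an ordinary queue is underneath, so only the wrapper itself would need a conditional disable structure.

```python
import queue

class LossyChannel:
    """Fixed-size channel that drops the oldest element when full."""
    def __init__(self, size):
        # In the real application this could wrap either an RT-FIFO or a queue.
        self._q = queue.Queue(maxsize=size)

    def enqueue_lossy(self, item):
        if self._q.full():
            self._q.get_nowait()     # drop the oldest element (lossy enqueue)
        self._q.put_nowait(item)

    def dequeue(self, timeout=None):
        return self._q.get(timeout=timeout)

# Call sites only ever use LossyChannel, so swapping the backend is confined
# to this one wrapper instead of every reference in the drivers.
ch = LossyChannel(size=8)
for i in range(20):
    ch.enqueue_lossy(i)
print(ch.dequeue())                  # oldest surviving element (12)
```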
  11. QUOTE (normandinf @ Aug 25 2008, 02:54 AM) This is one of my common use cases for Process Explorer: it shows the CPU and memory consumption of individual processes (along with a lot of other nice info).
  12. QUOTE (Neville D @ Aug 14 2008, 08:06 PM) Run code in the sense of trying out concepts with RT-only functions and doing tests with RT apps (no timing-related tests), like file load/save or communication with a host app.
  13. Hi, is it possible to install LV RT under VMware? My intended use case would be programming for RT on my notebook (without carrying a PXI around). I wouldn't expect to test any time-critical stuff, but running some general code should be possible. Any ideas?
  14. Hi, last week I had to hunt down a strange bug in a LV app. The use case looks about like this: the user can create a table of setpoint values in a small editor. As the file format for this we chose TDMS files with a waveform inside. A host application transfers the file to an RT system (since all the documentation calls the tdms_index files optional, we do not copy them). The RT app reads the waveform and uses the values for a motion control task. So far so good, a simple mechanism for a simple problem... but a strange behaviour showed up: _sometimes_ the waveform wasn't completely read out of the TDMS file. What happened is the following: the user made a file (e.g. named "file1.tdms") with a waveform length of 100 points, transferred it to the RT (only the tdms file gets copied) and started the control task. At this point the TDMS file functions recreated the tdms_index when reading the file for the first time. Then the user decided to alter the waveform in his editor on the host, creating a new one with a waveform length of 1000 points, saved under the same filename (file1.tdms). This file again gets transferred and read in by the RT app, but now the TDMS file read returns a waveform with only 100 points! The problem is the tdms_index file from the first read operation: it only knows about a "file1.tdms" with a 100-point waveform. It looks like the TDMS functions only use the filename to decide whether and which _index should be used; no other check seems to be involved!?! Our quick-and-dirty workaround now is to try deleting the _index files before every load (sketched below). The attached VI shows the same behaviour under LV 8.5.1 on Windows. Download File:post-1037-1218464169.vi Greetings Götz. Crossposted in the NI forums: tdms_index pitfall
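     For illustration, here is the workaround idea as a small Python sketch (not the LabVIEW code from the attached VI; the path is made up): before every load, delete any tdms_index file sitting next to the transferred .tdms file so the reader has to rebuild it.

```python
import os

def remove_stale_index(tdms_path):
    """Delete the companion .tdms_index file if it exists."""
    index_path = tdms_path + "_index"        # file1.tdms -> file1.tdms_index
    if os.path.exists(index_path):
        os.remove(index_path)                # forces the index to be rebuilt

# call this before every TDMS open/read of a freshly transferred file
remove_stale_index("C:/data/file1.tdms")
```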
  15. I just found the time to download LV 8.6. My first impression is that I would directly start a new project with it (unfortunately I'll have to continue my current project in 8.5.1). Running inside VMware on a Mac I couldn't really try Quick Drop, but it sounds promising. The XML functions and web services look nice (I just can't wait to play with them), but I wonder why the "httpRequestID" is a plain U32 and not a refnum?
  16. Hi, I recently had a similar problem when integrating a driver that used NI-CAN. It always had problems reinitializing itself (it obviously wasn't developed for my use case). The ncReset.vi NI-CAN KB article helped a lot in my case (I didn't want to debug the driver to make sure it closes all its references itself).
  17. I guess I didn't take enough time to write my question in the first place. First, in my case I don't use indicators inside structures, just a control. I am just not sure if and why LV would optimize the lower solution better. It's not that important, just something I wondered about.
  18. I just had a use case in which I needed to update a nested element of a large structure. This involved 4 nested in-place (unbundle, array index, variant, unbundle) structures to get to the point where I needed the data from the control. The placement "felt right" in there... I was just curious about how "good" this would be.
  19. Hi all, does the "No controls/indicators inside structures!" rule also apply to the In Place Element structure?
  20. Hi, has anyone encountered problems with the PXIe-8130 RT controller, especially with TCP/IP?
  21. Thank you for reporting. After playing a little with Join Numbers I came across another behaviour I wouldn't have (naively) expected (both results are the same). The implicit type cast of the signed inputs made total sense to me after a second thought, and reminded me to always think (at least) twice about the bits and bytes underneath when using the data manipulation primitives (a small sketch of the effect is below).
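     For anyone wondering what I mean, here is the effect written out in Python (not LabVIEW, just the raw bit arithmetic): joining the parts only uses their bit patterns, so a signed hi byte of -1 and an unsigned hi byte of 255 produce exactly the same 16-bit result.

```python
def join_u8(hi, lo):
    """Join two bytes into a 16-bit value, using only their bit patterns."""
    return ((hi & 0xFF) << 8) | (lo & 0xFF)

print(hex(join_u8(-1, 0x34)))    # 0xff34 (signed -1 reinterpreted as 0xFF)
print(hex(join_u8(255, 0x34)))   # 0xff34 (same bit pattern, same result)
```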
  22. Hi again, I recently found another data conversion I don't understand: why isn't there a coercion dot at the second Join Numbers? (LV 8.5.1) I know that this isn't good style for low-level data manipulation, but a hint from LV about the implicit byte that is added would be nice.
  23. Crashing LabVIEW by feeding the Equal primitive a valid refnum was the first thing a coworker did after I showed him this (yeah... on my computer... of course). Another interesting fact is the absence of a coercion dot. I guess that would point towards a deeply hidden form of typecast.
  24. Hi all, I am having another hard time understanding the reason behind things. Even with 42 as the answer to everything, my brain won't come up with an easy answer for why this works. Anyone around who could enlighten me why this works, and why it won't work for other things inside the cluster, like e.g. an I32? The VI is in 8.5.1. Download File:post-1037-1213351859.vi Greetings Götz
  25. Thx for sharing your code. It looks nice and I think I'll play with it when I get some time. Just one thing I noticed while browsing the code: the timeout in "Read TCP Data.vi" is only wired to the first TCP Read, although under normal conditions it shouldn't make a difference (a small sketch of what I mean is below).
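     To illustrate the remark, here is the pattern in Python (not the LabVIEW code being discussed; the host, port and 4-byte length header are made-up assumptions): when a message is read in two steps, the timeout would normally be applied to both reads, not just to the first one.

```python
import socket
import struct

def read_tcp_data(sock, timeout_s=2.0):
    sock.settimeout(timeout_s)           # applies to every recv call below
    header = sock.recv(4)                # first read: 4-byte length prefix
    (length,) = struct.unpack(">I", header)
    payload = b""
    while len(payload) < length:         # second read(s): the payload,
        payload += sock.recv(length - len(payload))  # still under the timeout
    return payload

# usage sketch:
# with socket.create_connection(("192.168.1.10", 6340), timeout=2.0) as s:
#     print(read_tcp_data(s))
```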