
Richard_Jennings

Members
  • Posts

    21
  • Joined

  • Last visited

  • Days Won

    1

Everything posted by Richard_Jennings

  1. Thanks for the great tool! It's so much better than mangling JSON by hand. I have an issue with the tool not handling variant data correctly: I have a cluster containing a string and a variant, and when flattened to JSON the variant's name comes out as an empty string: {"SN":"2016D8A1D86C","":{"ChaSt":"Ready","LocRemCtl":"Manual"}}. The variant's attributes are correctly included in an embedded JSON string; however, converting back to a cluster omits them. Thanks, Richard
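For readers outside LabVIEW, a minimal Python sketch (my addition, not part of the tool being discussed) of the round trip described above. It shows that an empty string is a perfectly legal JSON object key, so the data survives at the JSON level; what a converter loses is the variant's name to map the attributes back to.

```python
import json

# The flattened output quoted in the post: the variant's name became "".
flattened = '{"SN":"2016D8A1D86C","":{"ChaSt":"Ready","LocRemCtl":"Manual"}}'
parsed = json.loads(flattened)

# The variant's contents survive the parse under the empty key...
assert parsed[""]["ChaSt"] == "Ready"
# ...and serializing again preserves the empty key, so nothing is lost
# at the JSON level; only the variant's name/attributes are missing.
assert json.loads(json.dumps(parsed)) == parsed
```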
  2. I like Project Locker. It's free to start, and I've never had an issue.
  3. QUOTE (Jim Kring @ Apr 1 2008, 09:37 AM) Hi Jim, We touch on threading and execution priorities in "LabVIEW Graphical Programming": "The one exception to the time-slicing feature of each execution system is when a VI is set to run at subroutine priority. A subroutine VI will always run to completion within its execution system. A thread running a subroutine VI can still be preempted by the RTOS but no other VI running in the same execution system as a subroutine VI will be able to run until the subroutine VI is finished." Johnson, Gary W., "LabVIEW Graphical Programming" (McGraw-Hill, New York, 2006), p. 391. I use it occasionally to optimize a crucial VI or protect a critical resource, but as Michael pointed out, it's usually better to use LabVIEW's defaults. Repeatedly calling a subroutine VI can negatively impact execution. Just my $0.02, Richard
  4. QUOTE(DaveKielpinski @ Dec 25 2007, 08:58 PM) Hi Dave, #2 - unless you are trying to make your application impossible for anyone else to debug Dataflow and the ease of debugging a LabVIEW app by another programmer would be thrown out the window. XControls provide a great way to enhance a UI without unnecessarily complicating the block diagram, but hiding an elaborate messaging scheme would - I think - be the wrong way to use them. Richard
  5. Hi Eric, I'd tried that this weekend, but some of the events used in the GUI were not available in 7.0. Richard
  6. File Name: Virtual Logic Analyzer
     File Submitter: jenningsr@earthlink.net
     File Submitted: 4 Jan 2007
     File Category: LabVIEW Development Environment

     The Virtual Logic Analyzer is a development tool for monitoring VI execution. It is especially useful for optimizing the performance of multi-threaded, parallel applications. Monitoring data and timing information across an entire application can provide unexpected insight into performance bottlenecks. The VLA Probes use the OOP model Stepan Riha of NI introduced at NIWeek 97, and the Virtual Logic Analyzer concept is based on the presentation "Monitoring the Control and Timing of VIs" by Dana Redington at NIWeek 95.

     There are two parts to the Virtual Logic Analyzer: the Probes and the GUI interface. Probes are meant to be as efficient as possible, but do not place them in an MHz-type loop. Instead, place them in strategic locations where you can monitor program flow. Each VLA Probe is a reentrant VI that monitors your program's execution by timestamping data during program execution. Simply place this VI on your block diagram and connect the required Tag and Data inputs. Remember that dataflow governs LabVIEW execution. Be sure to give each Probe a unique Tag. Probes can be on a local or remote machine. Because Probes are OFF by default, they can be left in an application and individually switched on or off later with the GUI application.

     To view and interact with the Probes, run the Virtual Logic Analyzer application. The Info tab of the VLA application has a multi-column listbox with a live display of active Probes. Double-click a Probe to turn it on in the application you are monitoring. You can turn on as many or as few Probes as you need. Live data is displayed in the graph on the Data tab. Note that data is only displayed for Probes that are turned on. Ideally, the VLA GUI interface should be run on a separate computer from the application being monitored; this keeps GUI overhead from impacting the performance of the application under test. The VLA interface uses VI Server calls to interface with the Probe Registry.

     Click here to download this file
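A rough sketch in Python (not the actual VLA code; all names here are illustrative) of the probe architecture described above: each probe has a unique tag, is OFF by default, and, when enabled from a viewer, timestamps data into a shared registry.

```python
import time
from collections import defaultdict

class ProbeRegistry:
    """Illustrative stand-in for the VLA Probe Registry."""
    def __init__(self):
        self.enabled = set()              # tags switched on from the GUI
        self.records = defaultdict(list)  # tag -> [(timestamp, data), ...]

    def probe(self, tag, data):
        """Cheap when disabled; records a timestamped sample when enabled."""
        if tag in self.enabled:
            self.records[tag].append((time.monotonic(), data))
        return data                       # pass data through, dataflow-style

registry = ProbeRegistry()
registry.probe("loop-A", 1)               # OFF by default: nothing recorded
registry.enabled.add("loop-A")            # "double-click" the probe on
registry.probe("loop-A", 2)
assert len(registry.records["loop-A"]) == 1
```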
  7. Hmmm... at $295.00 for the online training course, I think you're better off buying Jeffrey & Jim's book AND Gary's and my book. I'm a little biased towards ours :thumbup: LabVIEW Graphical Programming, 4th Edition, McGraw-Hill, 2006. We put a lot of work into making sure it covered all the material on the CLAD and the CLD. There are even some practice exams for the CLD. I wouldn't use the CLD as a study guide for writing good LabVIEW applications; in general, I found the application requirements written in a way that almost forces you to use antiquated architectures - polling, etc. i2DX has some good advice above. Just remember it is a timed test. Richard
  8. Try using the find and replace feature! Edit >> Find and Replace You can search for functions, VIs, Globals, typedefs, etc... Richard
  9. Lot of parallel loops there :-) Remember a while loop always executes at least once. I believe your loops are executing twice because you are reading the "stop generation" local in the outer loop without any data dependency on the inner loop. Solution - pass the value from the stop generation local in the inner loop out to the outer loop and use it to stop the outer loop. In your example the outer loop reads the value of the local (F = keep going) and starts the inner loop (based on your case selector). When the inner loop terminates on stop generation, the outer loop executes one more time. As for why your application won't stop - chances are the queue is waiting for data that will never come. Kill the queue using one of the data writer loops instead. I hope this is understandable. Richard
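The fix above can be sketched in Python rather than G (hypothetical names, since the original VI isn't shown): the outer loop must stop on a value produced by the inner loop, creating a data dependency, instead of re-reading a local that was sampled before the inner loop ran.

```python
def run(stop_requested):
    iterations = 0
    stop = False
    while not stop:               # outer loop
        iterations += 1
        while True:               # inner "generation" loop
            if stop_requested():  # the "stop generation" condition
                stop = True       # value wired OUT of the inner loop
                break
    # Because 'stop' flows out of the inner loop, the outer loop does
    # not execute an extra iteration on a stale value.
    return iterations

assert run(lambda: True) == 1     # stops after exactly one outer pass
```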
  10. I think the features you mention get little attention because they have limited benefit. As others have pointed out, you have to have the Professional version for XControls to be useful. Arrays as UI elements are generally a bad idea, so the fact that they have scrollbars is not really a bonus. As for probes - we'll see. Matrix data type - exciting to some, I guess. File I/O primitives - a lot of file utility VIs were thrown out with 8.0, and fundamental changes were made to the primitives that I think are flawed: 1) The default mode of the read primitive in text mode is to terminate the read at an EOL. The ability to chunk through a file line by line is great, but I don't think it should be the default. 2) You cannot set the read mode via a control or constant. The only way to change the read mode is through a pop-up menu, and the only way to determine the read mode setting is through the same pop-up - there are no visual cues. So I don't think the file I/O primitives have taken a step forward. Variants - they needed a performance enhancement! Do you have a benchmark for how much slower they are than data passed by wire in their current implementation? My favorite feature of LV 8: the ability to work on multiple targets. My least favorite features: the project and the arbitrary reorganization of the palettes. Each time the palettes are changed, my productivity decreases because things are no longer where they should be. Just my two bits. Richard
  11. We have a couple of new LabVIEW authors - Jim Kring, Peter Blume. Any others? <shameless plug> Gary and I just released the 4th edition of "LabVIEW Graphical Programming" </shameless plug> McGraw-Hill is making plans for me to be on the expo floor signing books each night. What other authors will be there signing their wares? Richard
  12. oooh! There we go - instead of presentations we can have a WWE style grudge match! It's unfortunate they're at the same time. I would have liked to attend your presentation Norm. Richard
  13. See you there Jim! BTW how did JKI become the "poster boys" for NIWeek 06? That's awesome. Richard
  14. Sorry if this is off-topic ... My presentation is Thursday at 10:30, Room 15. "LabVIEW Embedded Application Programming" If you're thinking about building an embedded LabVIEW thingee then come listen. Topics include programming for LV E, a cost/performance comparison of tools and platforms, and the steps required to take your idea from concept to reality. I hope to see you there, Richard www.jembedded.com
  15. We are using LabVIEW Embedded. As Rolf mentioned, it can take weeks to months to port to a new processor. However LVE ships with example targets for UNIX and Windows console (no GUI) applications. Maybe this is all you need? Can you tell us more about your application? Richard Jembedded.com
  16. Hi, We're using LV E to target an ARM7 (Atmel EB40A) with 256kB RAM and 2 MB flash. The Colibri certainly has enough resources and horsepower to run LV Embedded. Since it is already running WinCE you might see if you can target with PDA for CE as previously mentioned. NI is usually open to letting you try out SW for a project. If PDA cannot target your platform then you'll need to port LV Embedded to it. Contact me off list if you want more information. richard at jembedded dot com Richard
  17. Hi Bjorn, I wrote a simple VI showing how you MIGHT be able to do this in LabVIEW. A couple of points: 1) Although the block diagram can run at kilohertz rates, front panel updates are typically at less than 100 Hz. The refresh rate on my LCD monitor is only 75 Hz, and televisions refresh at 25-30 Hz. 2) LabVIEW splits the block diagram code from the user interface code to keep the block diagram from waiting on the UI. You can force the block diagram and the UI to synchronize by selecting Advanced >> Synchronous Display from a control's (or indicator's) pop-up menu. Whatever loop this control or indicator is in will run in sync with the UI; other parallel loops will continue to run at full speed. The example VI takes advantage of synchronous display to timestamp the flash of a data point on an XY graph, and uses the built-in timestamp provided by the event structure to calculate the difference between the flash and any keypress. Hope this helps get you started, Richard

      Download File: post-724-1138901208.vi
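The measurement idea above, sketched in Python rather than the attached VI (names here are illustrative, not from the original): timestamp the moment the stimulus is drawn, take the timestamp the input event carries, and subtract.

```python
import time

def reaction_time_ms(flash_ts, keypress_ts):
    """Milliseconds between the stimulus flash and the keypress event."""
    return (keypress_ts - flash_ts) * 1000.0

flash = time.monotonic()     # taken as the data point is flashed on the graph
time.sleep(0.02)             # stand-in for the user's response delay
keypress = time.monotonic()  # in LabVIEW, the event structure supplies this
print(f"reaction time: {reaction_time_ms(flash, keypress):.1f} ms")
```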
  18. Hi, Is your event case set to "Lock front panel until event case for this event completes"? Events need to be handled by the event structure as they occur. It's not a good idea to put an event structure inside a case structure where it might not be able to fire. It sounds like the event is not handled immediately, causing the front panel to lock until the event is handled and complete. Richard
  19. Hi Matt, I'll repost our earlier email Q&A. On Monday, September 27, 2004, at 10:05 AM, Hill, Matt wrote:

      I'm acquiring PCM encoded streams, decoding them, and sending the results to an RT target in real time. (The decoded data must be sent every 2.5 ms.) I'm trying to implement this routine as many times as I can on a single board. Basically I'm running out of space. I'm trying to determine which coding techniques are the most efficient, and there doesn't seem to be any way to isolate a bit of code and benchmark how much of the FPGA it occupies, so it's a difficult process.

      >> Are you using single-cycle loops? They save a lot of space on the FPGA. Also watch out for the extra logic added by DIO arbitration.

      Also, I'm trying to achieve high speed in my processing. In this area I can isolate code and test it, but benchmarking every available coding option is a bit tedious. It would be nice to have some general guidelines.

      >> I use single-cycle loops, pipeline execution whenever possible, use a digital line as a flag to verify and benchmark execution on a scope, and avoid deeply nested logic.

      Here are some examples of the questions I have:

      1. It seems that the fastest way to send data from the FPGA to the RT target is to use a single IRQ on the FPGA, and then to post data word by word in a synchronous indicator. A synchronous indicator only updates once it has been read by the RT target. This technique is undocumented by NI, but one of their FPGA people showed it to me. I have found this to yield the highest transfer rates. It would be nice to understand why this is faster than sending the data in an array.

      >> Under LV 7.0 the max data rate I could achieve was 4.2 Mbits/sec. I stored data into on-chip memory in one loop and read it out in another. The read-out loop would place the data into an array and flag an interrupt. I haven't tried this with the synchronous indicator - I just found out about it Thursday. Interrupt-driven transfers worked, but even on our monster machine the CPU was bogged down handling interrupts. Forget about placing a graph on the front panel to display the data. My solution was to complain loudly to NI and route the byte-aligned data in parallel out the external connector to a 6534 DIO card. It's a horrible kludge, but it gets the data into the PC.

      2. I have problems where my FIFOs become unlinked from the program and return random data, but there is no error. I believe this unlinking is a bug that occurs when you rename an FPGA VI or copy it. Has anyone else experienced this?

      >> I haven't.

      3. When storing data temporarily, I use FIFOs. These can be created as flip-flops, look-up tables, or block memory. I would like to know the trade-off in speed and FPGA space between flip-flops and block memory.

      >> Don't know.

      4. What is the penalty incurred for creating a subVI within FPGA code, both in terms of time and space?

      >> I was told none. Everything is treated as one monolithic program by the compiler.

      5. How does saving data in a shift register compare with saving it in a FIFO?

      >> Shift registers are mapped directly into logic gates. Not sure what overhead is added by FIFOs.

      6. If I have a Boolean constant linked to five data sinks, should I separate this into five constants, thinking it will require less signal routing in the FPGA, or link all sinks to the same constant?

      >> Don't know, but I think this is something the compiler would optimize.

      7. If I send a U8 source into a U32 sink, I get the grey dot indicating a data type mismatch and LabVIEW changes the data to the correct type for me. Is it more efficient to let LabVIEW perform the conversion, or should I insert a conversion node myself?

      >> Don't know, but I would guess it's the same.

      8. To what lengths should I go to avoid using case structures?

      >> I use them wherever I need them. I avoid deeply nested case structures and try to minimize the number of cases. In the next issue of LabVIEW Technical Resources is a short article on LabVIEW FPGA that you might find interesting.

      The FPGA code has its own rules, and there seems to be very little documentation available explaining the optimal way to approach common tasks. Any little tips and tricks that anyone has discovered would be very appreciated.

      >> What are yours?

      Richard