
Leaderboard

Popular Content

Showing content with the highest reputation on 10/03/2014 in all areas

  1. The other day I was having trouble tracking down a hanging Actor. Debug Actor Framework let me shut them all down, but it wouldn't tell me which Actor was still left hanging; that took a little more digging. I realized this was a problem I was running into a lot, so I decided to add some simple tracking code to our department's Parent Actor Template. Maybe this'll be useful to someone else, or give someone else some ideas. It's just something I whipped together quickly, so it isn't thoroughly vetted or anything.

     Basically, it's a dialog that tells you (1) what Actors you've launched, (2) what Actors you've shut down, and (3) what Actors are still running. The tracker dialog is contained in a single VI, which is called in the Pre-Launch Init and Stop Core overrides (a rough sketch of the idea follows below). The call is inside a conditional disable structure with the condition "ACTORDEBUG==True". To enable the tracker, you set that conditional token in your Project Properties. I tried to keep the code simple and straightforward. The call to "Get LVOOP Name" is slow, but performance isn't a big concern for me as I only spin up a couple dozen Actors at once.

     The lvclass is attached, for 2012 and 2013. To use the tracker, change your actor inheritance to inherit from the Actor with Tracker class. The code depends on the OpenG toolkits. Try it out and let me know what you think. I imagine a lot of other people have created their own versions of this and other handy debugging actor tools; I'd love to see what others in the community have come up with. Mike
    1 point
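
     Not LabVIEW code, but a minimal Python sketch of the tracking idea in post 1, assuming a hypothetical register/deregister pair called from the launch and shutdown hooks; the class and method names are invented for illustration and are not the API of the attached lvclass.

        # Rough sketch: record every actor at launch, record it again at
        # shutdown, and report whatever is still running.
        import threading

        class ActorTracker:
            def __init__(self):
                self._lock = threading.Lock()
                self._launched = []   # cf. the Pre-Launch Init override
                self._stopped = []    # cf. the Stop Core override

            def register(self, actor_name):
                with self._lock:
                    self._launched.append(actor_name)

            def deregister(self, actor_name):
                with self._lock:
                    self._stopped.append(actor_name)

            def report(self):
                with self._lock:
                    running = [a for a in self._launched if a not in self._stopped]
                    return {"launched": list(self._launched),
                            "stopped": list(self._stopped),
                            "running": running}

        tracker = ActorTracker()
        tracker.register("Logger Actor")
        tracker.register("DAQ Actor")
        tracker.deregister("Logger Actor")
        print(tracker.report())   # "DAQ Actor" shows up as still running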
  2. I think you are after an analogue frame grabber rather than a transcoder, something more akin to the boxed version of the VRmagic AVC-2 (I've never used it, but I give it as an example).
    1 point
  3. The better solution would be to handle the value changes of those controls in the event structure.
    1 point
  4. Think data flow. You enter your loop, it reads the value of the boolean as false and stop as false, and then it sits and waits forever for a menu selection to occur. Turn on highlight execution and you'll see that there is no data flow. Changing the value of the controls is fine, but you are never actually reading those values. An easy solution which avoids the whole "why are we doing this?" question is to set a timeout of something like 100 ms on that event structure.
    1 point
  5. There is the "Data Logging" example which demonstrates this exactly in the SQLite API for LabVIEW. The issue would be whether you could log continuously at >200 Hz; maybe, with the right hardware and a bit of buffering (a rough sketch of the buffering idea follows below).
    1 point
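
     A minimal Python/sqlite3 sketch of the batching idea, not the SQLite API for LabVIEW example itself; the table name, column names, and batch size are assumptions made for illustration.

        import sqlite3, time

        conn = sqlite3.connect("log.db")
        conn.execute("CREATE TABLE IF NOT EXISTS samples (t REAL, value REAL)")

        buffer = []
        BATCH = 100                      # commit in batches, not per sample

        def flush():
            if buffer:
                with conn:               # one transaction per batch
                    conn.executemany("INSERT INTO samples VALUES (?, ?)", buffer)
                buffer.clear()

        def log_sample(t, value):
            buffer.append((t, value))
            if len(buffer) >= BATCH:
                flush()

        for i in range(1000):            # stand-in for the acquisition loop
            log_sample(time.time(), i * 0.1)
        flush()                          # write out whatever is left
        conn.close()

     Batching the inserts into one transaction per block of rows is what makes sustained rates of a few hundred rows per second realistic; committing every single row is usually the bottleneck.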
  6. Just to expand: TDMS is a flat-file database, basically a data table with a look-up table (by name, index, etc.) and no relationships between entries. MySQL, Postgres, SQLite, et al. are relational databases. A simple way to decide which is preferable for your requirements is to think about what questions you want to ask of the DB: if you just need to look up data based on a single criterion, e.g. channel names, then TDMS; if you need to ask "open" questions such as "How many", "What has", or "When did", then a relational database (see the sketch below for the contrast).
    1 point
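
     A small Python/sqlite3 sketch contrasting the two kinds of question; the schema and channel names are invented for illustration.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE channels (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE samples  (channel_id INTEGER, t REAL, value REAL);
        """)

        # TDMS-style question: fetch the data for one named channel.
        rows = conn.execute("""
            SELECT s.t, s.value
            FROM samples s JOIN channels c ON c.id = s.channel_id
            WHERE c.name = ?
        """, ("Temperature",)).fetchall()

        # "Open" question: how many readings over 100 did each channel log,
        # and when did each channel first exceed that value?
        summary = conn.execute("""
            SELECT c.name, COUNT(*) AS n_over, MIN(s.t) AS first_time
            FROM samples s JOIN channels c ON c.id = s.channel_id
            WHERE s.value > 100
            GROUP BY c.name
        """).fetchall()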
  7. When talking about arrays, it is important to distinguish between copy (noun) and copy (verb). Copy (noun) refers to a memory buffer containing data. Copy (verb) refers to the act of reading memory from one location and writing those values to another location. If you are running out of memory, then you need to focus on the number of buffers allocated. If you want code to run fast, then your main focus should be the number of times you read from one and write to another. In many cases having more buffers means more read/write operations, so reducing buffers tends to improve speed, but the relationship is indirect. My discussion below refers to the operation of reading from one location and writing to another.

     LabVIEW's memory manager handles resizes specially. This means it can try to expand an allocation at its current location before resorting to allocating a new buffer. It also means that in the cases where a new buffer is required, it is the memory manager that copies the existing data to the new location and disposes of the old buffer. So from an allocation standpoint, it doesn't matter whether a new element is being added to the beginning, middle, or end of an array; the chance of it causing a copy of every existing element is the same.

     After the allocation is done, we have enough space to add the new element, and this is where the location matters. If you're adding to the beginning, we copy every existing element to move it down. If you're adding to the end, we just have to set the new element. Going back to the original build array scenario: prepending an element with build array will copy the existing elements at least once and commonly twice, while appending an element with build array will either not copy or copy once. That makes appending always one less copy, and that qualifies as "much more efficient" to me.

     When LabVIEW shrinks an array, we do things in the opposite order but the same principles apply. Since we won't have enough room for all the data after resizing, we must move the data we want to the front before resizing. When deleting from the beginning, this means copying everything else; when deleting from the end, this requires nothing. We then call the memory manager. The odds are greater that the memory manager will keep the same buffer when shrinking, but there are still times when it won't, so it must copy all the data to the new location.

     Regarding delete from array vs array subset: delete from array is more expensive. Because delete from array has to handle cases where you delete from the middle, it doesn't produce a subarray. Array subset and split array always produce subarrays. This can reduce the overall number of copies, or it might just mean that the copy happens at the next node and the net result is no different. (The append-vs-prepend asymmetry is illustrated in the sketch below.)
    1 point
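
     Not LabVIEW, but the same prepend-versus-append asymmetry shows up in any contiguous array; this rough Python timing sketch (sizes picked arbitrarily) illustrates why prepending costs more.

        import timeit

        n = 20_000

        def append_all():
            a = []
            for i in range(n):
                a.append(i)       # new element written at the end, nothing moved

        def prepend_all():
            a = []
            for i in range(n):
                a.insert(0, i)    # every existing element shifted down first

        print("append :", timeit.timeit(append_all, number=3))
        print("prepend:", timeit.timeit(prepend_all, number=3))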