Everything posted by shoneill

  1. Logman, don't forget that immediately after writing a file, Windows will most likely have a complete copy of that file in RAM so your read speed will definitely be affected by that unless you're somehow RAM-limited or are explicitly flushing the cache. Always approach read speed tests with care. Often the first read will take longer than the second and subsequent reads due to OS file caching. Just for completeness.
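The cache effect described above is easy to see for yourself. Here is a minimal sketch (in Python, since the timing idea is language-independent): write a file, then read it twice and compare the timings. The exact numbers depend entirely on your OS, disk and cache state, so no particular ratio is guaranteed.

```python
import os
import tempfile
import time

def timed_read(path):
    """Read the whole file and return (data, elapsed seconds)."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    return data, time.perf_counter() - start

# Write a 4 MiB test file; immediately afterwards the OS most likely
# still holds a complete copy of it in the page cache.
path = os.path.join(tempfile.mkdtemp(), "bench.bin")
with open(path, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))

data1, t1 = timed_read(path)  # may already be served from cache (we just wrote it)
data2, t2 = timed_read(path)  # almost certainly served from cache
print(f"first read: {t1 * 1e3:.2f} ms, second read: {t2 * 1e3:.2f} ms")
```

To benchmark true disk read speed you would need to flush or bypass the cache (e.g. unbuffered I/O), which is exactly the caveat the post raises.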
  2. For simple atomic accessor access, splitting up actual objects and merging MAY work, but once objects start doing consistency checks (perhaps changing Parameter X in Child due to setting Parameter Y in Parent) you can end up with unclear inter-dependencies between your actual objects. When merging, the serialisation of setting the parameters may lead to inconsistent results because the order of operations is no longer known. When working with a DVR, you will always be operating on the same data and the operations are properly serialised. Of course it's of benefit to have some way of at least letting the UI know that the data in the DVR has changed in order to update the UI appropriately... but that's a different topic (almost).
  3. Instead of splitting and merging actual object data, split and share a DVR of the object to the UI and have both the UI and the caller utilise the DVR instead of the bare object (Yes, IPE everywhere can be annoying). That way you can simply discard all but one (it's only a reference, disposing it is only getting rid of a pointer) and continue with a single DVR (using a Destroy DVR primitive to get the object back) after the UI operation is finished.
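For readers coming from text-based languages, here is a rough analogue of the DVR idea above, sketched in Python (the class and field names are made up for illustration): both the "UI" and the "caller" hold the same reference, and every access goes through a lock playing the role of LabVIEW's In-Place Element structure, so all operations are serialised on a single copy of the data.

```python
import threading

class SharedParams:
    """Stand-in for a DVR: one data copy, access serialised by a lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {"x": 0, "y": 0}

    def modify(self, fn):
        # fn runs while the lock is held, like code inside an IPE frame
        with self._lock:
            fn(self._data)

    def snapshot(self):
        with self._lock:
            return dict(self._data)

shared = SharedParams()   # the "DVR"
ui_ref = shared           # discarding one of these later costs nothing:
caller_ref = shared       # they are just references to the same object

caller_ref.modify(lambda d: d.update(y=5))
ui_ref.modify(lambda d: d.update(x=d["y"] + 1))  # sees the caller's change
print(shared.snapshot())
```

Because both names point at one object, "discarding all but one" really is just dropping a reference, exactly as the post describes.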
  4. Right-click the Tab control while it's showing the correct tab and select "Set Current Value to Default".
  5. So fine control of the buffer (setting it to one or two messages) would force synchronous messaging at the TCP driver level? That's rather useful.
  6. How do you set the TCP buffer? It's typically a driver setting, not a LV setting, at least AFAIK. If it's possible to limit the receive buffer of an ethernet card, I'd be interested to know. My experience (based on others' experience, I must admit) is that this can't be controlled from within LV. If you fill the receive or transmit buffer of an ethernet card, you typically lose the connection. We see this sometimes when our host software can't keep up with our RT system sending data at 20 kHz. Buffer overflow, lost connections, chaos ensues.
  7. I would imagine not flooding the buffer would be one, trying to synchronise sender and receiver is another. If the listener is the "master" then the protocol needs to be implemented this way. So if it can be done with 1 TCP port, even better.
  8. You need a different protocol. Have the reader send an "I'm waiting" packet to the writer, and have the writer simply wait until one of these is present in its receive buffer before sending. This is duplex communication and requires two TCP ports, but it should throttle as you require.
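The scheme above is essentially credit-based flow control. A minimal sketch in Python (using `socket.socketpair()` as a stand-in for a real TCP connection, and made-up message contents): the reader announces readiness with a small token, and the writer only transmits after receiving one, so the writer can never run ahead of the reader.

```python
import socket
import threading

reader_sock, writer_sock = socket.socketpair()  # stand-in for a TCP link
received = []

def reader():
    for _ in range(3):
        reader_sock.sendall(b"RDY")              # "I'm waiting" packet
        received.append(reader_sock.recv(64))    # then block for the payload

t = threading.Thread(target=reader)
t.start()

for i in range(3):
    writer_sock.recv(3)                          # wait for a "RDY" token
    writer_sock.sendall(f"sample {i}".encode())  # only now send one message

t.join()
print(received)
```

Because exactly one payload is ever in flight per token, the sender is throttled to the receiver's pace — the synchronisation the post describes, without relying on TCP buffer sizes. (A real implementation would frame messages properly rather than trusting one `recv` per `send`.)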
  9. Shaun makes a good point. I remember doing that decades ago when there was no RGB processing available. I did some processing on a mono version of an image but overlaid the results on a copy of the colour original. Immediately my boss asked me "How on earth did you manage to do that?" because he assumed I had done the processing on the RGB image. He was kind of surprised to hear what I actually did. The human brain automatically correlates the visible data (picture and results) into a coherent pair even when they are not. I mean, Trump and President. Come on. </joke>
  10. IIRC most cameras have different acquisition modes (mono, 4-bit, 8-bit, different speeds and so on) but the properties that need to be set differ per camera. You can query the available modes via the IMAQ driver, find the one you need, note the index of that mode and then set it. Note that this may be different for each camera type, and your predecessor may have set it in code or in MAX.
  11. I have been experimenting with using polymorphic VIs with LVOOP. In my specific case, I have a large-ish hierarchy of very similar classes. Each one has a "Read" command. I have a base class which defines SOME of the functionality, but the "Read" method is not defined there since each datatype implements its own "Read": one for SGL, one for I32, one for Boolean and so on. As such, Dynamic Dispatch won't work (and I don't like Variants as inputs). This works OK except when I want to swap out a SGL datatype for an I32: the "Read" VI becomes broken because there is no inheritance link between the SGL "Read" and the I32 "Read", even though the two might have the exact same name.

Instead, I can create a polymorphic VI with all "Read" variations in it (I can do this by scripting, so it doesn't need to be manually maintained). Creation of the poly VI is actually relatively simple: choose a specific version of the method (file dialog), parse up to the root ancestor, then parse back down through all children which have the same tag and add them to a poly VI. To do this, I have added a line to the Description of the VIs involved (for example "<POLYOOP>Read</POLYOOP>"). Once created, I can simply drop this polymorphic VI instead of a naked "Read" and everything works fine. Of course this means that all of my "Read" VIs need to be static, because poly VIs don't work with Dynamic Dispatch.

A small trick is reversing the order of the concrete instances found when adding to the poly VI, so that the most specific are listed first, followed by the more generic, with the base class as the LAST entry in the poly VI. This way, autoselection will choose the most specific version available according to the class type at the input. If I need a different version, I can still right-click the VI and choose the specific version I want. This can be used to allow switching of methods (which may have different names AND different connector panes, but which all represent the same action for an object).
  - It allows for auto-adaptation of differently-named functions for two distinct classes, irrespective of connector pane differences.
  - It requires a poly VI to be maintained (not part of any of the classes, as this poly VI can be project-specific) - automating the poly VI creation alleviates most of this pain.
  - This operates on the visible wire type, not on the actual object type, so care is required to make sure the correct versions are actually being called (in my "Read" case, this is not a big problem).
  - Creating arrays of varied objects will cause all of them to execute the method associated with the common denominator.

This approach is targeted more at cases where we pass a command to a handler, retrieve an object of the same type (by utilising "Preserve Run-Time Class" at the output of the handler) and then want to access the returned data. Is there a way to do this without a poly VI? XNode? Using the poly VI pulls in a lot of classes as dependencies (all of which have the <PolyOOP>Read</PolyOOP> text)... Being able to do the adaptation dynamically would be cool. I have no XNode experience, but apparently XNodes and LVOOP don't mix too well? Perhaps most importantly: is this a completely off-the-wall and stupid idea?
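For comparison, the "most specific version first, base class last" autoselection described above can be mimicked in a text language. A sketch in Python (class names are illustrative): register one "Read" per class, then resolve by walking the object's method resolution order, so the most specific registered version wins and the base class acts as the fallback.

```python
# A tiny class hierarchy standing in for the LVOOP one in the post.
class Param:
    pass

class SGLParam(Param):
    pass

class I32Param(Param):
    pass

# One "Read" per class -- the equivalent of the poly VI's instance list.
read_registry = {
    SGLParam: lambda p: "read as SGL",
    I32Param: lambda p: "read as I32",
    Param:    lambda p: "generic read",   # the base class, the LAST resort
}

def poly_read(obj):
    """Pick the most specific registered Read, like poly-VI autoselection."""
    for cls in type(obj).__mro__:          # most specific class first
        if cls in read_registry:
            return read_registry[cls](obj)
    raise TypeError("no Read registered for this hierarchy")

print(poly_read(SGLParam()))  # most specific version wins
print(poly_read(Param()))     # falls through to the base entry
```

This also shows the limitation the post mentions: resolution happens on the type actually presented, so an array homogenised to the common base type would get the generic version.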
  12. I can't access either of the webcasts, I get a timeout connecting to ni.na3.acrobat.com Maybe this is linked to the new NI layout changes? Are there newer links which are still working?
  13. I was browsing through the Actor Framework discussions on the NI site yesterday and I came across a statement by AQ: "Never inherit a concrete class from another concrete class." I had to think about that for a second. The more I think about it, the more I realise that all of the LVOOP software I have been writing more or less adheres to this idea, but I had never seen it stated so succinctly before. Now that may just be down to me being a bit slow, but in the muddy and murky world of "correct" in OOP-land, this seems to be a pretty good rule to hold on to. Are there others which can help wannabe plebs like me grasp the correct notions a bit better? How about only ever calling concrete methods from within the owning class, never from without? I've been learning for a long time now, but somehow my expectations of LVOOP and the reality always seem a little disconnected. AQ's statement above helped crystallise some things which, up to that point, had been a bit nebulous in my mind. Well, I say I'm learning... I'm certainly using my brain to investigate the subject; whether or not I'm actually LEARNING is a matter for discussion... The older I get, the less sure I am that I've actually properly grasped something. The old grey cells just seem to get more sceptical with time. Maybe that in itself is learning...
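The rule quoted above translates directly to text languages. A minimal Python sketch (class names invented for illustration): shared behaviour lives in an abstract base, and the concrete classes are siblings of each other, never parent and child.

```python
from abc import ABC, abstractmethod

class Reader(ABC):
    """Abstract base: defines the interface, never instantiated."""
    @abstractmethod
    def read(self) -> str:
        ...

class FileReader(Reader):       # concrete leaf
    def read(self) -> str:
        return "file data"

class SocketReader(Reader):     # concrete SIBLING -- not a FileReader subclass
    def read(self) -> str:
        return "socket data"

print(FileReader().read())
print(SocketReader().read())
```

The payoff is the one the rule aims at: changing `FileReader` can never break `SocketReader`, because neither concrete class depends on the other's implementation, only on the abstract contract.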
  14. Type cast

    Random Trivia: Type Cast is also the only way to cast a 32-bit integer to a SGL and retain the bit pattern. I also had an application on an RT system which needed to do this, and when I realised this was not in-place (it takes approximately 1 us for EACH U32-to-SGL conversion, which was WAY too much for us to allow at the time) I had to scrap it and refactor my communications. We should get together and pester NI to offer an in-place, bit-preserving method to convert between different 32-bit representations!
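For anyone unfamiliar with what Type Cast does here: it reinterprets the 32 bits rather than converting the numeric value. The same bit-preserving reinterpretation can be sketched in Python with `struct` (little-endian assumed):

```python
import struct

def u32_to_sgl(u: int) -> float:
    """Reinterpret a U32 bit pattern as an IEEE-754 single (SGL)."""
    return struct.unpack("<f", struct.pack("<I", u))[0]

def sgl_to_u32(f: float) -> int:
    """Reinterpret a single-precision float's bits as a U32."""
    return struct.unpack("<I", struct.pack("<f", f))[0]

print(hex(sgl_to_u32(1.0)))    # the IEEE-754 single pattern for 1.0
print(u32_to_sgl(0x40490FDB))  # the bit pattern of pi as a SGL
```

Note the contrast with a numeric conversion: converting the integer 0x3F800000 *by value* would give a float of about 1.06e9, whereas reinterpreting its *bits* gives exactly 1.0.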
  15. Never mind, I just need to learn to read.
  16. Does the sync refer to updates to the WAL or to the DB file itself?
  17. I just came to the same conclusion while reading the documentation on WAL mode. Go figure. It seems the WAL file does not operate in synchronous mode at all. I wonder if that affects robustness at all?
  18. First of all, great toolkit. Thanks. I am re-visiting some benchmarking I started some time ago and have observed some unexpected behaviour. After creating a file with an index, I repeatedly overwrite the existing data with random DBL values (of which there are 500k rows x 24 columns). I see an UPDATE rate of approximately 70k updates per second (wrapping all 500k x 24 updates in a single transaction). All good so far. Actually, it's 70k updates where each update writes 24 distinct columns, so it's really 1.68M value updates per second - but 70k rows.

I open the file once, fill it with zeros (the total number of data points is known from the start - 500k x 24 plus 3 columns for 3D indexing X, Y and Z), create an index on the most important columns (X, Y and Z, which are NOT overwritten later), prepare a statement for updating the values and then re-use this statement in a loop. I put all 500k x 24 updates in a single transaction to maximise speed. Only after the benchmark VI is finished (after looping N times) do I finalise the statement and close the SQLite file.

All good so far, but now comes the weird part. When I tried investigating the effect of parallel read access, I saw no decrease in UPDATE performance. Quite the opposite: when executing a QUERY from a different process (using a different SQLite DLL) whilst writing, the UPDATE speed seemed to INCREASE, from 70k to approximately 124k per second. On a side note, this is also the speed increase seen when executing the UPDATE with "synchronous=OFF". Has anyone seen something similar? Can I somehow use this fact to my advantage to generally speed up UPDATE commands? Is the synchronous mode somehow being negated in this situation? The whole thing feels weird to me and I'm sure I'm making a colossal mistake somewhere. I am writing the data in LV and reading using the SQLite DB Browser, so different DLLs and different application spaces are involved.
I won't be able to control which SQLite DLL the users have for reading, so this is pretty much real-world for us. File system is NTFS, OS is Win7 64-bit. SQLite DB Browser is version 3.9.1 (64-bit); it uses the V3.11 SQLite DLL as far as I know. I'm using LV 2015 SP1 (32-bit). I've observed this behaviour with both the V3.10.0 SQLite DLL and the newest V3.15.2 SQLite DLL. Oh, and I'm writing to a HDD, not an SSD. My PRAGMAs for the UPDATE connection (some are leftovers from investigative benchmarking, i.e. threads):

PRAGMA threads=0;
PRAGMA temp_store=MEMORY;
PRAGMA cache_size=-32000;
PRAGMA locking_mode=NORMAL;
PRAGMA synchronous=NORMAL;
PRAGMA journal_mode=WAL;

The results are confusing me a little.
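The benchmark structure described above (WAL journal, NORMAL sync, prepared UPDATE statement re-used inside one big transaction against a pre-filled, indexed table) can be reproduced in miniature with Python's `sqlite3`. The table and column names below are illustrative, not the originals, and the row count is scaled down.

```python
import random
import sqlite3
import time

con = sqlite3.connect("bench.db")
# A subset of the PRAGMAs from the post.
con.executescript("""
    PRAGMA journal_mode=WAL;
    PRAGMA synchronous=NORMAL;
    PRAGMA cache_size=-32000;
""")
con.execute("CREATE TABLE IF NOT EXISTS data (x REAL, y REAL, z REAL, v REAL)")
con.execute("CREATE INDEX IF NOT EXISTS idx_xyz ON data (x, y, z)")
con.execute("DELETE FROM data")
# Pre-fill with zeros; the index columns are never overwritten later.
con.executemany("INSERT INTO data VALUES (?, ?, ?, 0.0)",
                [(i, i, i) for i in range(10_000)])
con.commit()

t0 = time.perf_counter()
with con:  # one transaction wraps ALL updates, as in the post
    con.executemany("UPDATE data SET v = ? WHERE rowid = ?",
                    [(random.random(), i + 1) for i in range(10_000)])
rate = 10_000 / (time.perf_counter() - t0)
print(f"~{rate:,.0f} row updates/s")
con.close()
```

Running a second connection (or the DB Browser) against `bench.db` while this loops would be the analogue of the parallel-read experiment; whether the UPDATE rate rises, as observed in the post, will depend on the WAL checkpointing behaviour of the specific SQLite builds involved.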
  19. For someone against OOP, your posts are really abstract.
  20. My dad, your dad? Oh come on. The code is not mine to share. It belongs to the company I work for. You also said for me to "show" you the code, not "give" you the code. I'll at least define elegance for you as I meant it. I call the solution elegant because it simultaneously improves all of these points in our code:

  - Increasing readability (both via logical abstraction and LVOOP wire patterns) - this is always somewhat subjective, but the improvement over our old code is massive (for me)
  - Reducing compile times (FPGA compile time - same target, same functionality - went from 2:40 to 1:22, mostly due to readability and the resulting obviousness of certain optimisations) - this is not subjective
  - Lower resource usage - again not subjective, and a result of the optimisations enabled by the abstractions - from 37k FF and 36k LUT down to 32k FF and 24k LUT is nothing to sneeze at
  - Increasing code re-use both within and across platforms - this is not subjective
  - Faster overall development - this is not subjective
  - Faster iterative development with regard to changes in performance requirements (clock speed) - this is not subjective

That's basically just a rehash of the definition of "Turing complete". So your statement is untrue for any language that is not Turing complete (Charity or Epigram - thanks, Wikipedia). It also leaves out efficiency. While you could theoretically paint the Sydney Opera House with a single hair, it doesn't make it a good idea if time or money constraints are relevant. I mean, implementing VI Server on FPGA could theoretically be done, it just won't fit on any FPGA chip out there at the moment...
  21. There isn't a snowball's hope in hell that you're getting the full code, sorry dude. The "classical LabVIEW equivalent" as a case structure simply does NOT cut the mustard, because some of the cases (while they would eventually be constant-folded out of the equation) lead to broken arrows due to unsupported methods. There's no way to have anything involving DBL in a case structure on FPGA. Using objects it is possible, and the code has a much better re-use value. Nothing forces me to use these objects only on FPGA. I think you end up with an unmaintainable amalgamation of code in order to half-arsedly implement what LVOOP does for us behind the scenes. But bear in mind I HAVE done something similar to this before, with objects and static calls in order to avoid DD overhead. Performance requirements were silly. Regarding callers having different terminals... that's a red herring, because such functions cannot be exchanged for one another, OOP or not. Unless you start going down the "Set Control Value" route, which uses OOP methods BTW. My preferred method is front-loading objects with whatever parameters they require and then calling the methods I need without any extra inputs or outputs on the connector pane at all. This way you can re-use accessors as parameters. But to each their own.
  22. Note, each and every object can be defined by the caller VI. Each individual parameter can have a different latency as required. For 10 parameters with 4 possible latencies each, that's already over a million possible combinations of latencies.
  23. Here are two small examples. Here I have several sets of parameters I require for a multiplexed analog output calculation, including setpoints, limits, resolution and so on. Each of the parameters is an object which represents an "Array" of values with corresponding "Read at Index" and "Write at Index" functions. In addition, the base class implements a "Latency" method which returns the latency of the read method. By doing this I can choose a concrete implementation easily from the parent VI. If I can tolerate more latency for one parameter, I use my dual-clock BRAM interface with a minimum latency of 3. If I am just doing a quick test, or if latency is really critical, I can use the much more expensive "Register" version with a latency of zero. I might even go insane and write up a version which reads from and writes to existing global variables for each element in the array. Who knows?

In this example I am using the base class "Latency" method to actually perform a calculation on the relative delays required for each pathway. By structuring the code properly, this all gets constant-folded by LabVIEW: the operations are performed at compile time, and my various pathways are guaranteed to remain in sync where I need them synced. Even the code used to perform calculations such as "Offset correction" can have an abstract class but several concrete implementations which can be chosen at edit time without having to completely re-write the sub-VIs. I can tell my correction algorithm to "use this offset method", which may be optimised for speed, area or resources. The code knows its own latency and slots in nicely, and when compiled, all extra information is constant-folded. I just need to make sure the interface is maintained and that the latency values are accurate. How to do this without LVOOP on FPGA? VI Server won't work. Conditional disables are unwieldy at best and, to be honest, I'd need hundreds of them.
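The delay-balancing arithmetic described above is simple to state outside LabVIEW. A sketch in Python (parameter names and latency values are invented for illustration): each path reports its own read latency, and the compile-time calculation pads every other path up to the slowest one so all outputs arrive in sync. In LabVIEW on FPGA this arithmetic is exactly what gets constant-folded away.

```python
# Per-path read latencies, as reported by each object's "Latency" method.
path_latencies = {
    "setpoint": 3,  # e.g. dual-clock BRAM read, latency 3
    "limit":    0,  # e.g. register read, latency 0
    "offset":   1,  # e.g. some intermediate implementation
}

# Pad every path up to the slowest one so all pathways stay aligned.
max_latency = max(path_latencies.values())
extra_delay = {name: max_latency - lat for name, lat in path_latencies.items()}
print(extra_delay)  # {'setpoint': 0, 'limit': 3, 'offset': 2}
```

Swapping one parameter's implementation (say, register for BRAM) changes its reported latency, and the padding for every other path updates automatically — which is why the approach survives changes in performance requirements without re-plumbing the code.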