ShaunR


Everything posted by ShaunR

  1. There probably are things off the shelf (maybe look into car fleet trackers). This sounds like a fun project that you should just do because you can, though. Maybe later your charity can sell it to other organisations (paintball?) to raise some funds. If you get the users to use their own phones, you will even get cell location enhancement and cell location when GPS is unavailable. Remember that mobile phones are tracking devices that make telephone calls. You could use RFID in addition to GPS. GPS is accurate to a few metres (10?) and RFID is cheap. RFID would be great for detecting when a room is occupied and by whom, if the GPS cannot distinguish. I would even be tempted to make my own RFID senders with an Arduino or similar, but it really depends on your timescale. How long have you got? What are the time constraints for this project? I'll give your charity some licences for the Websocket API, gratis, if you want to stream the data via websockets to people's browsers on their tablets or phones (you only need them for development). Webservices? Data Dashboard? Meh! I thought you wanted real-time. That technology is sooooo 2000.
  2. Well. You mention that you don't want to use LVOOP because it makes things difficult to grasp for novices, but then advocate a Muddled Verbose Confuser (MVC) architecture, which even experts on that design pattern can't agree on - what should be in which parts when it comes to real code. As it needs to be simple for novices, I also suggest you throw rotten tomatoes at anyone that mentions "The Actor Framework". Since there may be many people who build on the code, many with limited experience, have you thought about a service oriented architecture? With this approach you only need to define the interfaces to external code written by "the others". They can write their code any way they like, but it won't affect your core code if they stuff it up. You can then create a plugin architecture that integrates their "modules", which communicate with your core application via the interfaces. The module writers don't need to know any complicated design patterns or architecture, or even the details of your core code (however you choose to write it). They will only need to know the API and how to call the API functions.
  3. Xilinx14_7? Did you install that or did it come with your FPGA?
  4. Yup. It is linear. A while later... That was up to the default maximum number of columns (1000-ish for that version). As I was building version 3.8.6 for uploading to LVs-Tools, I thought I would abuse it a bit and compile the binaries so that SQLite could use 32,767 columns (3.8.6 is a bit faster than 3.7.13, but the results are still comparable). I think I've taken the thread too far off from the OP now, so that's the end of this rabbit hole. Meanwhile, back at the ranch...
  5. Whatever was needed to achieve the required functionality. Probably a boolean, two numerics and an enum, plus other stuff like refnums, events, etc.
  6. It's an excellent point. For example: trying to log 200 double-precision (8-byte) datapoints at 200 Hz on an sbRIO to a class 4 flash memory card is probably asking a bit much. The same on a modern PC with an SSD should be a breeze if money is no object. However, we are all constrained by budgets, so a 4TB mechanical drive is a better fiscal choice for long-term logging just because of the sheer size. The toolkit comes with a benchmark, so you can test it on the hardware. On my laptop SSD it could just about manage to log 500 8-byte (DBL) datapoints in 5 ms (a channel per column, 1 record per write). Whether that is sustainable on a non-real-time platform is debatable and would probably require buffering. 200 datapoints worked out to about 2 ms and 100 was under 1 ms, so it could be a near-linear relationship between number of columns (or channels, if you like) and write times. The numbers could be improved by writing more than one record at a time, but a single record is the easiest. I have performance graphs of numbers of records versus insert and retrieve times. I think I'll do the same for numbers of columns, as I think the max is a couple of thousand.
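The column-count measurement above can be reproduced outside LabVIEW. Here is a hedged sketch using Python's built-in sqlite3 module (not the toolkit's own benchmark VI - just an analogue of the same "one record per write, N REAL columns" experiment; the function name `bench_insert` is made up for illustration):

```python
import sqlite3
import time

def bench_insert(n_cols, n_rows=200):
    """Time single-record inserts into a table with n_cols REAL columns.

    Returns the mean seconds per record. Uses an in-memory DB so the
    numbers reflect SQLite overhead rather than a particular disk.
    """
    conn = sqlite3.connect(":memory:")
    cols = ", ".join(f"c{i} REAL" for i in range(n_cols))
    conn.execute(f"CREATE TABLE log ({cols})")
    placeholders = ", ".join("?" * n_cols)
    row = tuple(float(i) for i in range(n_cols))
    start = time.perf_counter()
    for _ in range(n_rows):
        conn.execute(f"INSERT INTO log VALUES ({placeholders})", row)
    conn.commit()
    conn.close()
    return (time.perf_counter() - start) / n_rows

# One point per column count, mirroring the 100/200/500 channel tests above.
for n in (100, 200, 500):
    print(f"{n} columns: {bench_insert(n) * 1000:.3f} ms per record")
```

Note that stock SQLite builds cap tables at 2000 columns (SQLITE_MAX_COLUMN), which is why raising the limit required recompiling the binaries.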
  7. So can we not prepend an index to the name in the variant attribute list to enforce a sort order and strip it off when we retrieve? (I really should download it and take a proper look, but having to find and install all the OpenG stuff first is putting me off...lol)
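The prefix trick suggested above is easy to show in text form. A minimal Python sketch (hypothetical helper names `tag`/`untag`; the real variant-attribute code would live in LabVIEW, but the idea is identical - prepend a zero-padded index so an alphabetical sort reproduces insertion order, then strip it on retrieval):

```python
def tag(names_values):
    """Prefix each name with a sortable, zero-padded index like '0003_'."""
    return {f"{i:04d}_{name}": v for i, (name, v) in enumerate(names_values)}

def untag(tagged):
    """Sort on the prefix to recover insertion order, then strip it off."""
    return [(k.split("_", 1)[1], v) for k, v in sorted(tagged.items())]

pairs = [("zeta", 1), ("alpha", 2), ("mid", 3)]
stored = tag(pairs)             # alphabetical key order now matches the index
assert untag(stored) == pairs   # original order survives the round trip
```

The zero-padding matters: without it, "10_" would sort before "2_" and the order would break past nine entries.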
  8. <Dumb question> The request is for values not to be sorted alphabetically. Is the solution just not to sort? Do we, indeed, sort them? Is it part of the prettifier rather than the encoding? Where exactly is this sorting taking place?
  9. A relational database (RDB) is far more useful than a flat-file database, generally, as you can do arbitrary queries. Here, we are using the query capability to decimate without having to retrieve all the data and try to decimate in memory (which may not be possible). We can ask the DB to just give us every nth data-point between a start and a finish. To do this with TDMS requires a lot of jumping through hoops to find and load portions of the TDMS file if the total data cannot be loaded completely into memory. That aspect of the RDB is already coded for us. It is, of course, achievable in TDMS, but it is far more complicated, requires more coding, and needs fine-grained memory management. With the RDB it is a one-line query and the job is done. Additionally, there is an example written that demonstrates exactly what to do, so what's not to like? If the OP finds that he cannot achieve his 200 Hz acquisition via the RDB, then he will have no other choice but to use TDMS. It is, however, not the preferred option in this case (or in most cases, IMHO).
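The "every nth point between a start and a finish" query can be sketched with Python's sqlite3 (a hedged analogue, not the SQLite API for LabVIEW itself; decimating on `rowid % n` assumes contiguous rowids, i.e. no deleted rows - a real logger might keep its own sample index column instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (t REAL, value REAL)")
# Simulate 50 s of 200 Hz data (10,000 samples at 5 ms spacing).
conn.executemany("INSERT INTO samples VALUES (?, ?)",
                 [(i * 0.005, float(i)) for i in range(10000)])

# Every 10th point between t = 10 s and t = 20 s, decimated inside the DB,
# so only the decimated result ever reaches application memory:
n = 10
rows = conn.execute(
    "SELECT t, value FROM samples "
    "WHERE t BETWEEN 10.0 AND 20.0 AND rowid % ? = 0",
    (n,)).fetchall()
print(len(rows))  # 200 points instead of 2000
```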
  10. Yes. I see. SQLite is actually the reason here. They removed localisation from the API some time ago, so it will only accept decimal points. So the solution, as you say, is to use "%.;" in the query string to enforce it. And yes, you may run into another issue in that the SELECT for double precision uses the LabVIEW primitive "Fract/Exp String To Number". This will cause integer truncation on reads of floating-point numbers from the DB on localised machines. I've created a ticket to modify it; you can get updates on the progress from there. In the meantime, your suggestion to use the string version of SELECT and apply the "Fract/Exp String To Number" function yourself, with "use system decimal point" set to false, is correct. You can also modify SQLite_Select Dbl.vi yourself like this. Those are the only two issues, and thanks for finding them. Both changes will be added in the next release of the API which, now I finally have an issue to work on as an excuse, will be in the next couple of days.
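The same class of localisation bug can be demonstrated outside LabVIEW. A small Python sketch (the `de_DE` locale is an assumption and may not be installed on a given machine, hence the guard; the exact mangling differs from the LabVIEW integer-truncation case, but the root cause - locale-aware parsing of SQLite's always-dot output - is the same):

```python
import locale

# SQLite always emits floating-point text with a '.' decimal separator.
raw = "3.25"

# Locale-aware parsing on a comma-decimal system treats '.' as a grouping
# character and silently mangles the value:
try:
    locale.setlocale(locale.LC_NUMERIC, "de_DE.UTF-8")
    mangled = locale.atof(raw)  # '.' read as thousands separator -> 325.0
    locale.setlocale(locale.LC_NUMERIC, "C")
except locale.Error:
    pass  # locale not installed; the point stands regardless

# Locale-independent parsing is the safe route for DB output:
assert float(raw) == 3.25
```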
  11. That's awesome. I'm still trying to figure out how you moved the pieces' images from boolean to boolean without looking at the code, and that's before I get to the AI.
  12. Congrats. Here's your promotion
  13. Can't your PC box do it? You are plugging it in via USB, so making it available over TCP/IP should be straightforward since it has Gigabit LAN and WiFi. Another alternative is a Raspberry Pi, but then we don't need your box or LabVIEW!
  14. A poor excuse. That is the kind of thing I would expect from a 10-year-old "script kiddie" over at the NI forums. You are not, I suspect (if you are, then we have lots of time to train you better).
  15. I think you are after an analogue frame grabber rather than a transcoder. Something more akin to the boxed version of the VRmagic AVC-2 (I've never used it, but give it as an example).
  16. Ssssshhh. Don't tell Rolf. But there is a superb library that no-one is supposed to use.
  17. Can you post the error? Row and column count return integers and should not have anything to do with decimal points. Yes, you can have parallel reads (but only a single writer, without getting busy errors). The syntax is standard SQLite SQL, which is 99% compatible with M$ SQL and MySQL. If you work with DBs, you have to learn SQL.
  18. There is the "Data Logging" example which demonstrates this exactly in the SQLite API for LabVIEW. The issue would be whether you could log continuously at >200Hz - maybe with the right hardware and a bit of buffering.
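The "bit of buffering" idea can be sketched in text form with Python's built-in sqlite3 module (a hypothetical analogue only - the actual "Data Logging" example ships with the SQLite API for LabVIEW and will differ). The point is to amortise the commit cost: accumulate samples in memory and flush them in one transaction instead of paying a commit every 5 ms:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (t REAL, value REAL)")

buffer = []
FLUSH_EVERY = 200  # one flush per second at 200 Hz

def log_sample(t, value):
    """Queue a sample; flush the whole batch in a single transaction."""
    buffer.append((t, value))
    if len(buffer) >= FLUSH_EVERY:
        with conn:  # one transaction (hence one fsync) per batch
            conn.executemany("INSERT INTO log VALUES (?, ?)", buffer)
        buffer.clear()

# Simulate 5 s of 200 Hz acquisition.
for i in range(1000):
    log_sample(i * 0.005, float(i))

print(conn.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 1000
```

The trade-off is the usual one: a larger buffer means fewer, cheaper commits but more data at risk if the power fails mid-batch.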
  19. I sit firmly on the other side of the fence with things like this. When it's a choice between form and function, function wins - especially if the spec can be interpreted ambiguously via tortured semantics. Adhering to specs by observing a strict negative of a positive statement (it doesn't say they cannot be ordered; rather, I expect it was stated thus so as not to restrict) is why the native 2013 JSON is inferior (IMHO) to this library - speed aside. This library works where the native one throws errors, because the native one adheres even more strictly, word for word, to the spec. Inf and NaN are examples of where some libraries have implemented function over form. In that case, it was in spite of the spec, which specifically disallowed them. There is precedent here if there is enough benefit. Would it really hurt that much to make the output look exactly like the input, which is what we all kind of expect and know to be right? Relying on the spec's throwaway description of an unordered list seems a bit of a cop-out to me, and preserving order would probably make testing much easier and simpler, as you could do a straight input/output compare. It wouldn't break existing code, either. So I'm not sure what the resistance is, apart from the effort required - which has already been done.
  20. Oh, I don't know. I think we appreciate it here, but condemn most of these things because NI will do something about it now they know for sure it's been compromised.
  21. Hmmm. I just sat down and attempted to give Rex websocket capabilities, since it is already event driven, follows the Open, Write/Read and Close structure and *should* have slotted straight in with a simple wrapper (I'm updating the Websocket API to use HTTPS and thought this would be a great use case). The only issue I found was that there is no background "service", so to speak, that could monitor the TCP/IP connection to generate the events, since VI Server methods invoke directly remotely (via the ACBR). Did you add a background VI service, or did you go through the web service interface to get to the users' webservice VIs?
  22. Once the buffer has every point written to, continue writing at buffer[usersize+1], wrapping back around to buffer[0] after buffer[usersize+usersize] is written. Nearly. After i = usersize, you wrap around to buffer[0], all the while writing to both buffer[i] and buffer[i+usersize]. Then, if you read at index i for a length of usersize, you will get what you need.
  23. It doesn't prevent wrap-around from happening. It means that you can read a contiguous array of data at any point in the buffer without having to chop and concatenate.
  24. Not so trivial in native LabVIEW, since to read contiguous bytes using Array Subset you need to split and wrap the array by making copies and sewing them together again. You'll be lucky if it doesn't end up a lot slower than a rotate. There is one that uses direct calls to the LV memory manager, though, but a rotate is much easier.
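The double-write buffer described in the last few posts can be sketched in text form. A minimal Python analogue (class name `MirroredRing` is made up for illustration; the LabVIEW version would be an array plus Array Subset, but the indexing logic is the same): a buffer of size 2*usersize where every sample is written at i and at i+usersize, so the most recent usersize samples are always one contiguous slice.

```python
class MirroredRing:
    """Ring buffer with a mirror copy, so reads never wrap."""

    def __init__(self, usersize):
        self.n = usersize
        self.buf = [0.0] * (2 * usersize)
        self.i = 0  # next write position, always 0 <= i < usersize

    def write(self, x):
        self.buf[self.i] = x
        self.buf[self.i + self.n] = x   # the mirror copy
        self.i = (self.i + 1) % self.n  # wrap back to buffer[0]

    def read(self):
        """Oldest-to-newest window as one contiguous slice:
        no chop-and-concatenate, no rotate."""
        return self.buf[self.i : self.i + self.n]

r = MirroredRing(4)
for x in range(6):  # write 6 samples into a 4-deep ring
    r.write(float(x))
assert r.read() == [2.0, 3.0, 4.0, 5.0]  # last 4 samples, in order
```

The cost is doubled memory and two writes per sample; the payoff is that "read at index i for a length of usersize" is a single contiguous subset, which is exactly the point made in posts 22 and 23.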