Everything posted by ShaunR

  1. No, well... not really. It depends. If you are not going to have nulls in your data, then you could use the C string and not worry about it. However, I'm guessing that because you are looking at string-length bytes (Pascal-style strings can hold no more than 255 bytes of data, by the way) you are intending to pass arbitrary-length binary data that just happens to be strings. There are two ways of transferring variable-length data to/from a library:
       • Memory is allocated by LabVIEW and the library populates this memory with the data (the library needs to know the size, and the resultant data must be no more than that passed; create and pass an array, like the ol' for loop of bytes).
       • Memory is allocated by the library and LabVIEW accesses this memory (LabVIEW needs to know the size, and the resultant data can be any size; MoveBlock).
     Either way, one or the other needs to know the size of the allocated memory. The general method is case no. 2, since it does not require pre-allocation, is unlikely to crash because the data is too big, and only requires one call for allocation and size. You call the function and get the size as one returned parameter and a pointer (uintptr_t) as the other, then use MoveBlock to get the data (since the size will be known at that point from the size parm). You will also need a separate function to release the memory. This also happens to be the fastest. The CDECL calling convention is the one of choice, as STDCALL is Windows-specific (you are intending to port this to other platforms.....right?). Both patterns are sketched in C below.
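
A minimal C sketch of the two patterns described above; the function names, error convention and payload are illustrative, not any particular library's API:

```c
#include <stdlib.h>
#include <string.h>

/* Pattern 1: the caller (LabVIEW) allocates. The library is told the
   buffer size and must not write more than that. Returns 0 on success,
   or -1 with *needed set so the caller can retry with a bigger buffer. */
int get_data_caller_alloc(unsigned char *buf, size_t buf_size, size_t *needed)
{
    static const unsigned char payload[] = "arbitrary\0binary\0data";
    *needed = sizeof(payload);
    if (buf == NULL || buf_size < sizeof(payload))
        return -1;                 /* caller must retry with *needed bytes */
    memcpy(buf, payload, sizeof(payload));
    return 0;
}

/* Pattern 2: the library allocates. LabVIEW receives a pointer (uintptr_t
   in the Call Library node) plus the size, copies the data out with
   MoveBlock, then calls the matching free function. */
int get_data_lib_alloc(unsigned char **out, size_t *size)
{
    static const unsigned char payload[] = "arbitrary\0binary\0data";
    *out = malloc(sizeof(payload));
    if (*out == NULL)
        return -1;
    memcpy(*out, payload, sizeof(payload));
    *size = sizeof(payload);
    return 0;
}

/* Memory allocated by the library must be released by the library. */
void free_data(unsigned char *p)
{
    free(p);
}
```

Whichever pattern you use, the rule stands: memory allocated by the library is released by the library (hence free_data), never by LabVIEW.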
  2. Apart from Daklu's sound advice, you might also check that you are using the High Performance Ethernet driver, not so much for bandwidth but more for CPU usage. Missing pieces of the image (or entire images) are usually due to bandwidth saturation/collisions. Just because a camera is capable of supplying images at a given frame rate and resolution doesn't mean that it can all be squirted over a LAN interface; you generally have to play with the camera settings to get things working nicely. Whilst the "theoretical" maximum of 1 GbE is 125MB/s, in reality I have never achieved more than about 100MB/s reliably (assuming jumbo frames are enabled), and on a 100Mb interface you will be lucky to get 10MB/s (rule of thumb is about 80% of interface speed). If jumbo frames aren't being used (the default MTU is usually 1500) or are not supported by the interface, then this is usually the bandwidth restriction and you will have to go down to lower resolutions and frame rates, as the packet overhead crucifies the performance (note that if you are going through a router or switch, jumbo frames will also have to be turned on for those devices and match the packet size of the LAN interface). The rough arithmetic is sketched below.
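
For the curious, a sketch of the arithmetic behind those numbers, assuming standard Ethernet framing overhead (preamble+SFD 8, header 14, FCS 4, inter-frame gap 12 = 38 bytes per packet on the wire) and a 28-byte IP+UDP header inside the MTU, as a GigE Vision stream would carry:

```c
#include <stdio.h>

int main(void)
{
    const double link_bytes_per_s = 125e6;   /* 1 GbE = 125 MB/s raw */
    const int mtus[] = { 1500, 9000 };       /* standard vs jumbo    */

    for (int i = 0; i < 2; i++) {
        double payload  = mtus[i] - 28;             /* image bytes per packet */
        double wire     = mtus[i] + 38;             /* wire bytes per packet  */
        double goodput  = link_bytes_per_s * payload / wire;
        double pkts_sec = link_bytes_per_s / wire;  /* per-packet CPU cost    */
        printf("MTU %4d: ~%.1f MB/s payload, ~%.0f packets/s to service\n",
               mtus[i], goodput / 1e6, pkts_sec);
    }
    return 0;
}
```

Note that jumbo frames only buy a few percent of raw wire efficiency; the bigger win is roughly a six-fold drop in packets per second, which is where the CPU usage goes.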
  3. In my case, I'm pretty sure it's my leopard-print lycra underpants with the elephant-trunk codpiece.
  4. Like Comic-Con (nerds) without the babes in lycra. You shouldn't lose sleep over it.
  5. I've done it many times, even to the point of a complete scripting language for one client. It works well as long as you don't have to handle state too much and it is sequential. You end up with files of the form: CHECK_VOLTS, TCPIP->READ->MEAS:VOLT:DC?, %.2f Volts, 10, -10 (a parsing sketch follows below).
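
A sketch of a parser for one such line, in C; the struct fields and their meanings (step name, transport->operation->command chain, display format, upper/lower limits) are assumptions read off the example above:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    char   name[32];     /* e.g. CHECK_VOLTS                    */
    char   chain[64];    /* e.g. TCPIP->READ->MEAS:VOLT:DC?     */
    char   format[32];   /* e.g. %.2f Volts                     */
    double high;         /* upper pass limit                    */
    double low;          /* lower pass limit                    */
} test_step_t;

/* Strip leading/trailing spaces and newlines from a token in place. */
static char *trim(char *s)
{
    while (*s == ' ') s++;
    char *e = s + strlen(s);
    while (e > s && (e[-1] == ' ' || e[-1] == '\n')) *--e = '\0';
    return s;
}

int parse_step(const char *line, test_step_t *step)
{
    char buf[256];
    strncpy(buf, line, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    char *fields[5];
    int   n = 0;
    for (char *tok = strtok(buf, ","); tok && n < 5; tok = strtok(NULL, ","))
        fields[n++] = trim(tok);
    if (n != 5)
        return -1;                          /* malformed line */

    snprintf(step->name,   sizeof(step->name),   "%s", fields[0]);
    snprintf(step->chain,  sizeof(step->chain),  "%s", fields[1]);
    snprintf(step->format, sizeof(step->format), "%s", fields[2]);
    step->high = atof(fields[3]);
    step->low  = atof(fields[4]);
    return 0;
}
```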
  6. Well. You haven't really given much detail, and if a parent class is too much hassle, I would guess anything else would be too. However, there are some simple edge-case scenarios where you can make a flexible system that can be extended without writing additional code at all. Consider a DVM that is CMD->response for all operations. By defining a "translation" file you can convert operations to commands (or a series of commands) and expected results, so that you can use a simple parser and just choose the file depending on the device (see the sketch below). If a new DVM is used (but the tests are the same), then you just create a new translation file. You can extrapolate this technique to the tests themselves too. However, it all depends on the system, what you need to do, and the complexity/flexibility required. None of that has to do with classes or VIs; it's more a design choice to abstract hardware dependencies away from the programming environment.
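
To make the translation-file idea concrete, a sketch in C; a real implementation would load the table from a per-device file rather than hard-coding it, and the operation and command names here are only examples:

```c
#include <stdio.h>
#include <string.h>

/* Operation -> command translation for one device. Each device gets
   its own table (in practice, its own file of operation/command pairs). */
typedef struct {
    const char *operation;   /* what the test asks for          */
    const char *command;     /* what this DVM needs to be sent  */
} translation_t;

static const translation_t dvm_a[] = {
    { "READ_VOLTS_DC", "MEAS:VOLT:DC?" },
    { "READ_AMPS_DC",  "MEAS:CURR:DC?" },
    { "RESET",         "*RST"          },
    { NULL, NULL }
};

/* Look up the device-specific command for an abstract operation. */
const char *translate(const translation_t *table, const char *operation)
{
    for (; table->operation; table++)
        if (strcmp(table->operation, operation) == 0)
            return table->command;
    return NULL;   /* operation not supported by this device */
}

int main(void)
{
    const char *cmd = translate(dvm_a, "READ_VOLTS_DC");
    printf("%s\n", cmd ? cmd : "unsupported");
    return 0;
}
```

Swapping in a new DVM then means supplying a new table (file), not touching the test code.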
  7. Of course. A polymorphic VI has the feature that, although you must have identical connector pane layouts and directions, you can have different terminal types and numbers of defined terminals. However, the instance is selected by the type wired to it, or explicitly by the developer. In my example, I would just be requiring that the class behaves like a polymorphic VI at run-time, with method selection dependent on the object rather than the data type wired to it. (In fact, I see no semantic difference between a polymorphic VI and a DD class except the selection mechanism; the sketch below shows the two in C terms.)
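
The distinction can be shown in C terms, as a sketch: C11's _Generic selects by the type "wired in" at compile time (the polymorphic VI case), while a function pointer carried by the object selects at run time (the DD case). The device and method names are invented for illustration:

```c
/* Compile with a C11 compiler (e.g. gcc -std=c11). */
#include <stdio.h>

/* Compile-time selection by the type "wired in", analogous to a
   polymorphic VI: the compiler picks the instance. */
void show_dbl(double x) { printf("double: %f\n", x); }
void show_i32(int x)    { printf("int:    %d\n", x); }
#define show(x) _Generic((x), double: show_dbl, int: show_i32)(x)

/* Run-time selection by the object itself, analogous to dynamic
   dispatch: the function pointer travels with the data. */
typedef struct device {
    const char *name;
    void (*measure)(const struct device *self);
} device_t;

void dvm_measure(const device_t *self)   { printf("%s: MEAS:VOLT:DC?\n", self->name); }
void scope_measure(const device_t *self) { printf("%s: CURVE?\n", self->name); }

int main(void)
{
    show(42);       /* resolved at compile time by type   */
    show(3.14);

    device_t devs[] = { { "dvm",   dvm_measure   },
                        { "scope", scope_measure } };
    for (int i = 0; i < 2; i++)
        devs[i].measure(&devs[i]);   /* resolved at run time by object */
    return 0;
}
```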
  8. Well, I would argue it does exist. A VI is calling the parent (the sub-VI); you just don't have your hands tied by the syntax of classes. Is a VI not "inheriting" from a sub-VI (Create SubVI menu command)? Can you not "override" the sub-VI behaviour? Are you not "encapsulating" by using a sub-VI? Can you not "compose" a sub-VI's inputs/outputs? I have always argued that the only things LV classes bring to the table above and beyond classic LabVIEW are dynamic dispatch and a way of organising that makes sense to OOP mindsets. If you are not using DD, then all you have is a different project style and a few wizards that name sub-VIs for you. If you look at your example, you are simply calling a sub-VI; however, you are restricted in doing so by a peculiarity of the implementation as a class.
  9. I'm still not getting this (emphasis added by me). It is valid syntax if you don't use classes, since it is simply calling a common sub-VI. You seem (in this example) to be arguing for classic LabVIEW behaviour, which already exists.
  10. Indeed. So you are getting the status directly from the device as and when you need it, rather than "remembering" state in the software. It doesn't matter what the method of transmission through the software is (actors, events, queues, notifiers or whatever; a good point to again whinge about not being able to hook VISA refs up to the event structure).
  11. Unless your hardware uses push streaming (rare as rocking-horse droppings), the hardware cannot tell the UI, since it will be CMD->RESP.
  12. To paraphrase Hoover............ with a stop button only, if you click the X button on the form (as users have been trained to do), your panel will disappear (so you can't get to the stop button) but your VI will still be running.
  13. Google is your friend, but looking at the javascript source here should help you out. As for ranges: well, the key is to convert your measured frequency to a number of semitones from a base frequency. You are not counting in frequency but in semitones, so round fractions of semitones/crotchets/quavers/lemons/whatever up or down (just be consistent). @Hoover: glad I don't work for you. lol.
  14. Well. The case statement is looking a bit unwieldy. Since you are only interested in the fundamental frequency, you can take advantage of the fact that an octave is 2^n times the fundamental and a semitone is the 12th root of 2 (about 1.059). Then you should be able to calculate the note directly and represent it as a bit pattern (see the sketch below).
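
As a sketch, the direct calculation in C, assuming A4 = 440 Hz as the base frequency (MIDI-style note numbering is used here just to recover the name and octave):

```c
#include <math.h>
#include <stdio.h>

/* Quantise a measured frequency to the nearest semitone: the distance
   from A4 in semitones is 12 * log2(f / 440), and rounding it replaces
   the big case structure with one calculation. */
void freq_to_note(double freq)
{
    static const char *names[12] = { "C", "C#", "D", "D#", "E", "F",
                                     "F#", "G", "G#", "A", "A#", "B" };
    /* 69 is A4 in MIDI numbering; positive for any audible frequency. */
    int midi = 69 + (int)lround(12.0 * log2(freq / 440.0));
    printf("%.2f Hz -> %s%d\n", freq, names[midi % 12], midi / 12 - 1);
}

int main(void)
{
    freq_to_note(440.0);    /* A4 */
    freq_to_note(261.63);   /* C4 */
    freq_to_note(445.0);    /* rounds to the nearest semitone: A4 */
    return 0;
}
```

(Compile with -lm.)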
  15. Having dealt a lot with hardware systems, there are a few conclusions I came to quite a few years ago. The first is that with hardware settings, you should always rely on the device to tell you what you can and can't do. If you start restricting user input values before sending them to the device, you very quickly end up trying to replicate the device logic, which can get very complex. With this in mind, it becomes just a case of setting max/min values on the controls. Additionally, most modern devices give readable error responses, so usually you only need to bubble them back to the UI.
      Similarly, when it comes to device state, you should not try to maintain it in the software; rather, interrogate any state information as and when you need it (see the sketch below). Maintaining device state in the software will vastly over-complicate your code and lead to hard-to-debug disparities between the actual device state and that of your software. This situation is totally avoidable and means that very simple recovery procedures can be facilitated without complex code, and the code can reflect state without logic. If you bear these "rules of thumb" in mind, your devices will do all the hard work, greatly simplifying your code as well as making it far more robust.
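
A sketch of what "interrogate, don't remember" looks like in code, with a stubbed transport standing in for the VISA layer (the SCPI queries are standard; everything else is illustrative):

```c
#include <stdio.h>

/* Stub transport standing in for VISA write/read on the instrument
   session; real code would do viWrite/viRead here. */
static int send_cmd(const char *cmd)
{
    printf("-> %s\n", cmd);
    return 0;
}

static int read_resp(char *buf, size_t len)
{
    snprintf(buf, len, "1");            /* simulate "output is on" */
    return 0;
}

/* Is the output on right now? Don't remember it in a local variable
   or shift register; ask the device. */
static int output_enabled(void)
{
    char resp[16] = { 0 };
    if (send_cmd("OUTP:STAT?") != 0 || read_resp(resp, sizeof(resp)) != 0)
        return -1;                      /* comms error: state unknown */
    return resp[0] == '1';
}

/* Bubble the device's own error text back to the UI instead of
   replicating its validation logic in software. */
static int last_error(char *msg, size_t len)
{
    if (send_cmd("SYST:ERR?") != 0)
        return -1;
    return read_resp(msg, len);
}

int main(void)
{
    char err[64];
    printf("output enabled: %d\n", output_enabled());
    if (last_error(err, sizeof(err)) == 0)
        printf("device reports: %s\n", err);
    return 0;
}
```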
  16. I've always rolled my own. However, I've heard some good things about this one, though from C# programmers rather than LabVIEW ones (they have an eval kit).
  17. Well. Leaving aside the economic arguments for now..... The big thing about Bitcoin is that there is no centralised accountancy, it is an almost instantaneous peer-to-peer transfer, and it is extremely hard to trace (anonymity). A few years ago you might argue that the latter was only a benefit to criminals; with current events (SOPA, Snoopers' Charter et al.) many ordinary folk feel that they need to protect their privacy. Many economies are also being sucked dry by corporate interests, and other monetary systems have arisen to combat this, so people are looking for alternative strategies. Bitcoins are gaining popularity mainly because of these two points, and you could say it's filling a need at the right time.
      Bitcoins obtain their "value" from scarcity (a limited maximum number of 21 x 10^6) and from the energy consumed in generating them (electricity to run a bitcoin miner). There are a couple of bitcoin "exchanges" where you can convert bitcoins to fiat, which is probably a more understandable, although flawed, measure of value (as of writing, 1 bitcoin ≈ 14 Euros).
      ...... But on to the interesting stuff ...... Bitcoins are "mined" (that's the terminology) just as resources (like gold) are. Instead of digging a hole, however, a computationally expensive algorithm (a hash) is used by a Bitcoin miner (see the toy mining loop below). This is synonymous with password "cracking" in computational terms, but the intent is quite different. Whilst in the early days you could mine with a fairly low-spec PC, the "difficulty" has progressed to the point where you now need dedicated hardware, usually racks of GPUs in the tens of giga-hashes per second. (This has led to an argument that early adopters gain a significant wealth advantage, but let's stick to the interesting stuff.)
      The design of the Bitcoin system is such that there is a maximum of 21 million that can ever be created (currently there are approximately 10M in circulation). A bitcoin itself is just a history of transactions (the block chain) from the "base" bitcoin block, and the difficulty of "cracking" the hash changes in relation to the rate at which bitcoins are mined. (I should also point out that bitcoins are not mined singly, but in batches.) The number of coins in a batch (currently 50) also reduces as more bitcoins are mined, meaning that the cost per BC (electricity consumed) increases with more bitcoins in circulation.
      You can get a more concise overview here. Suffice to say, on the surface it looks fairly straightforward (just money, right?) but the deeper you go, the more you realise just how much thought has gone into the system and how technically elegant it is (and not a banker in sight!). I can see a couple of issues that need to be resolved (bitcoins can be destroyed but not recreated, and defence against a party taking over 50% of the network), but it does look very promising and has certainly gained the attention of the authorities.
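
To make the "computationally expensive algorithm" concrete, here is a toy version of the mining loop as a sketch. It uses OpenSSL's SHA256 and a trivially easy difficulty; real Bitcoin hashes an 80-byte block header against a full 256-bit target, but the brute-force principle is the same:

```c
/* Compile with -lcrypto (OpenSSL). */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Stand-in for the block header contents being mined over. */
    const char   *block = "previous-hash|merkle-root|timestamp";
    unsigned char buf[128], d1[32], d2[32];
    const unsigned char zeros[2] = { 0, 0 };   /* toy difficulty target */

    for (unsigned nonce = 0; nonce != 0xFFFFFFFFu; nonce++) {
        int n = snprintf((char *)buf, sizeof(buf), "%s|%u", block, nonce);
        SHA256(buf, (size_t)n, d1);
        SHA256(d1, sizeof(d1), d2);            /* Bitcoin double-hashes */
        if (memcmp(d2, zeros, sizeof(zeros)) == 0) {
            printf("nonce %u gives a hash with %zu leading zero bytes\n",
                   nonce, sizeof(zeros));
            return 0;
        }
    }
    return 1;   /* no winning nonce in the 32-bit range (very unlikely) */
}
```

Raising the difficulty is just lengthening zeros: each extra zero byte multiplies the expected work by 256, which is (loosely) how the network keeps the mining rate constant as hardware gets faster.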