gb119

Members
  • Content Count: 302
  • Joined
  • Last visited
  • Days Won: 5

gb119 last won the day on March 9

gb119 had the most liked content!

Community Reputation: 27

2 Followers

About gb119

  • Rank
    Extremely Active
  • Birthday 05/27/1973

Profile Information

  • Gender
    Male
  • Location
    Leeds, UK
  • Interests
    Experimental condensed matter physics

LabVIEW Information

  • Version
    LabVIEW 2018
  • Since
    1995

Recent Profile Visitors

3,168 profile views
  1. The original article has been updated with a new version of the Jupyter Client, built as a VI Package. This version switches the parser to JSONText, removes the OpenG dependencies, and fixes a number of bugs. The example client application gets installed into the example finder. It's still far from production-ready code...
  2. That was the direction I was thinking in - once I've finished unbreaking the effects of changing the JSON parser 🙂
  3. So for transferring largish chunks of floating point data (e.g. image data or similar), JSON is quite unwieldy (and I worry about losing precision when round-tripping to/from ASCII data). I haven't looked at it in enough detail, but it seemed to me that there was a good chance that the numpy representation of a double float and LabVIEW's might be sufficiently close that one could do a fairly efficient pack/unpack operation. I take the point about the speed of the JSON serialisers - the choice was more influenced at this point by what I was used to working with than by speed. I'm looking at replacing it with yours, and in the process keeping more of the message as raw JSON strings rather than storing it in arbitrary clusters in variants....
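     A rough numpy-side sketch of the sort of pack/unpack that idea implies (the function names are illustrative, and the big-endian '>f8' dtype assumes LabVIEW's default Flatten To String byte order - that would need checking against whatever message format ends up being used):

        # Assumption: numpy float64 and LabVIEW DBL are both IEEE 754 binary64,
        # so a byte-level pack/unpack avoids the ASCII round trip entirely.
        import numpy as np

        def pack_doubles(arr):
            """Flatten a float array to big-endian bytes for the LabVIEW side."""
            return np.ascontiguousarray(arr, dtype='>f8').tobytes()

        def unpack_doubles(raw, shape=None):
            """Rebuild a numpy array from big-endian double bytes."""
            arr = np.frombuffer(raw, dtype='>f8')
            return arr.reshape(shape) if shape is not None else arr

        data = np.random.rand(512, 512)      # e.g. image-sized data
        raw = pack_doubles(data)             # raw bytes, no precision loss
        assert np.array_equal(unpack_doubles(raw, data.shape), data)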
  4. For a while I've been tinkering with the idea of building a LabVIEW client that could talk to Jupyter kernels for interfacing with Python, having previously been a user of RolfK's OpenG LabPython package. Although LabPython, and now the native LabVIEW 2018 Python support, have many uses (and indeed I use them in my 'production' code), there are a few things that a Jupyter kernel client can do:
     • Not be tied to particular versions of Python - LabPython got stuck for me around 2.7.10 and I think was fussy about which compiler had been used. The 2018 native support is restricted to 2.7 or 3.6 I believe (3.7 definitely doesn't work)
     • Not be tied to the same 32/64-bitness as LabVIEW
     • Be able to offload the Python to a remote server, or go cross-platform
     I haven't investigated the Enthought package (too much hassle to get a new vendor set up on my University's purchasing system, and not really able to justify spending taxpayers' money on playing!) which I suspect might be doing something similar. Anyway, the attached zip file is a proof of concept - it includes a test VI that will try to find an ipython executable and fire it up, and you can then interact with it. There are lots of things not properly tested and probably a slew of bugs as well. To run it you need several dependencies:
     • OpenG Toolkit libraries, particularly the LabVIEW Data, string, error and array libraries
     • The JKI JSON library - I had to pick a JSON serialiser and the JKI one seemed as good as any and better than some...
     • The JSONText JSON serialiser library, available via VIPM
     • The Zero-MQ LabVIEW bindings - libzmq is the underlying network transport used in Jupyter and there is an excellent LabVIEW bindings library for it
     • The attached SHA256 implementation, so that the communications messages are properly HMAC signed
     • LabVIEW 2018 - sorry, I'm only writing in 2018 now and this code uses malleable VIs with type specialization and asserts in use - so it may not be easy to backport
     There are a few things that I'd still like to figure out - primarily, the client protocol is very much focussed (reasonably enough) around the idea that the client is sending strings and is interested in string representations of data. I'd like to figure out an efficient way to transfer largish LabVIEW data structures backwards and forwards. I think this probably means developing a custom message handler, registering it with the kernel when the code starts, and writing some Python 'flatten to string' and 'unflatten from string' code - but that's only this week's concept.... If you use it, please note that this is probably only alpha quality at best - it may or may not work for you, it may not be safe to use, and if it causes any loss or damage or eats your cat then it's not my fault.... Edit 6th March 2019: I've switched the JSON parser to JSONText, found and fixed a few bugs, and managed to build a VI package for it that should have the correct dependencies and installs the example client in the LabVIEW example finder. university_of_leeds_lib_sha256-1.0.5.3.vip university_of_leeds_lib_jupyter_client-1.0.1.5.vip
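     For reference, the signing step that the SHA256 library is needed for looks roughly like this on the Python side - a sketch of the documented Jupyter wire protocol, with a deliberately stripped-down header (a real one also carries msg_id, session, username, date and version fields):

        # The key and signature scheme come from the kernel's connection file;
        # the signature is the HMAC-SHA256 of the four JSON frames in order.
        import hashlib
        import hmac
        import json

        def sign_frames(key, header, parent_header, metadata, content):
            """Return (hex signature, serialised frames) for one message."""
            frames = [json.dumps(part).encode('utf-8')
                      for part in (header, parent_header, metadata, content)]
            mac = hmac.new(key.encode('utf-8'), digestmod=hashlib.sha256)
            for frame in frames:
                mac.update(frame)
            return mac.hexdigest().encode('ascii'), frames

        # The signed message then goes out over ZeroMQ as:
        #   [identities..., b'<IDS|MSG>', signature, header, parent, metadata, content]
        sig, frames = sign_frames('key-from-connection-file',
                                  {'msg_type': 'execute_request'},  # simplified header
                                  {}, {},
                                  {'code': 'print(1+1)', 'silent': False})
        print(sig)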
  5. I would use a lock-in amplifier with some sort of comms interface :-) Failing that, you could simply implement the same algorithm - multiply your signal by sine and cosine waveforms at the reference frequency, low-pass filter, calculate the mean, and feed the result into an x,y to polar co-ordinate converter. Alternatively, up-sample the signal, feed it into an FFT, index out the right (complex) element and convert to r,theta.
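     A minimal numpy sketch of that multiply / low-pass / convert-to-polar recipe (names and test parameters are made up, and a plain mean stands in for a proper low-pass filter):

        import numpy as np

        def lockin(signal, fs, f_ref, t=None):
            """Demodulate `signal` (sampled at `fs` Hz) at reference frequency `f_ref`."""
            if t is None:
                t = np.arange(len(signal)) / fs
            # Multiply by in-phase and quadrature references...
            in_phase = 2 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))
            quad     = 2 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))
            # ...the mean is the crudest possible low-pass filter; then x,y -> polar.
            return np.hypot(in_phase, quad), np.arctan2(quad, in_phase)

        fs, f_ref = 10_000.0, 137.0
        t = np.arange(0, 1.0, 1 / fs)
        sig = 0.5 * np.sin(2 * np.pi * f_ref * t + 0.3) + np.random.normal(0, 0.1, t.size)
        r, theta = lockin(sig, fs, f_ref, t)
        print(r, theta)   # ~0.5 amplitude, ~0.3 rad phase relative to the sine reference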
  6. Having to use the accessor VI rather than a property node tripped me up - I can see how the compilation issues mean not accessing the private data directly, but not the difference between calling an accessor VI directly and calling it inside a property node.....
  7. I've seen variants of 'VI Failed to Compile' with projects involving complicated class hierarchies, class instances as data members of other classes, etc. Nothing particularly new - I've seen this in versions of LabVIEW from 8.6 onwards - usually simply opening the offending VI 'standalone' is enough to get it to compile, and then everything is OK until the next time LabVIEW decides it needs to compile all my code. Now in 2018 64-bit I've got a slightly odd one where, when I open my top-level VI, LabVIEW decides that the Actor Framework isn't compiling, but if you open that first and then load the top-level VI everything is fine... I find it incredibly hard to debug the issue as it only happens when one has a sufficiently large project. Keeping compiled code separate seems to make it more likely, as does having very large class hierarchies or classes with private data containing classes. Actor Framework also seems to be a contributing factor. The error message is almost as helpful as the old MacOS installer error "There was a problem with the hard disc, please use a different one....."
  8. Assuming that "crap" is not a viable option, I like "brittle" too. "Fragile" to me would imply code that is susceptible to runtime errors, as opposed to code that is easily broken by subtle changes in editing. Perhaps "chaotic" in the maths sense might suit?
  9. Well, the screenshot you showed implies that Python threw a SyntaxError exception - although I can't immediately see why. There are a couple of issues that I can see in your code though: 1) You've got several Windows-style paths quoted in strings there, but \ in Python is an escape character - to include \ in a string you need to either write "C:\\Python27\\Lib" or else use a raw string r"C:\Python27\Lib". 2) There is a major limitation to do with thread safety between Python and LabVIEW - LabVIEW is intrinsically multi-threaded and some Python libraries - such as numpy - are not. This can lead to seemingly random crashes in LabVIEW - so not your current problem, but likely to become one... There's been quite a bit of discussion in various threads here on this problem by folk who know more than me about the internals of both LabVIEW and Python.... (And I've just spotted: does np.loadtxt close files when passed an open file handle? If not, your script will be leaving files open.)
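     Point 1, and the file-handle point, in Python terms (the data-file path here is invented):

        import numpy as np

        # Backslashes in ordinary string literals are escape characters, so
        # either double them up or use a raw string:
        lib_dir = "C:\\Python27\\Lib"     # escaped backslashes
        lib_dir = r"C:\Python27\Lib"      # raw string, same value

        # If you do pass np.loadtxt an open handle rather than a filename,
        # manage the handle yourself, e.g. with a context manager:
        with open(r"C:\data\scan.txt") as fh:   # hypothetical data file
            data = np.loadtxt(fh)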
  10. As well as having 32-bit LabVIEW, is your Python 32-bit as well? And what distribution of Python are you using (or is it vanilla Python 2.7)? I've failed to get recent versions of Anaconda Python to play nicely with LabPython, whereas vanilla 2.7.10 from python.org was OK. Sometimes I find that running VI Package Manager via 'Run as Administrator...' is necessary even though my user account is a member of the Administrators group. The error looks like a file-permissions problem, so probably the latter is the first place to try...
  11. Indeed - this is what I had in mind as 'keeping state in the sub-vi'. The OpenG library polymorphic vi is a nice little wrapper for this.
  12. In the specific example I had in mind, setting the input to the same value again would cause the instrument to reset, whilst not changing the input doesn't - so 'No Change' is different from 'the same again'. You can then expose the "No Change" option to the end user - but that might be more confusing. The alternative (which is what I normally end up doing) is to have separate enums/rings for the UI and the underlying driver - but then you have to keep them consistent, otherwise it gets very confusing(!). (There are other alternative strategies, like maintaining state in the caller or sub-VI - or possibly the hardware - of course.) In the sub-VI, I'd argue that code that detects sentinel values, e.g. by looking for NaN in floats or default values for rings, is less transparent than each control having an IsWired? property that is only true if that control was wired when the sub-VI was called. Bounds testing on control values is then just about making sure it's a valid value, and doesn't get overloaded with a "value was supplied" meaning as well.
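     The text-language analogue of that 'was this input wired?' test is the sentinel-default pattern - a rough Python illustration (the instrument API here is invented):

        # A module-private sentinel distinguishes "parameter not supplied" from
        # any legitimate value, including "the same value again".
        _NO_CHANGE = object()

        def configure(instrument, shape=_NO_CHANGE, amplitude=_NO_CHANGE, offset=_NO_CHANGE):
            """Only touch the settings that were actually passed in (hypothetical driver)."""
            if shape is not _NO_CHANGE:
                instrument.set_shape(shape)        # re-sending this resets the output
            if amplitude is not _NO_CHANGE:
                instrument.set_amplitude(amplitude)
            if offset is not _NO_CHANGE:
                instrument.set_offset(offset)

        # configure(awg, amplitude=0.5)   # shape and offset genuinely left alone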
  13. I'd quite like this feature for cases where the natural type for an input is a ring or enum, but you also want a "don't change" as the default option. Yes, you can have "don't change" as one of the enum/ring values, but that then gives an ugly API. There are certain instrument-driver-type sub-VIs I have where the instrument has a mode, e.g. a waveform shape, and some common parameters - amplitude, offset, duty cycle. One could have separate sub-VIs for each of these parameters, but it's also nice to have just the one VI that changes only the parameters that were wired up and leaves alone those that weren't.
  14. HDF5 is used quite widely in big-facilities-based research (synchrotrons, neutron sources and the like). It's a format that supports a virtual directory structure containing metadata attributes and multi-dimensional array data sets. Although it's possible to browse and discover the names and locations of all of the data in the file, it's generally easier if you have some idea of where in the virtual file system the data you are interested in is kept. There's a couple of LabVIEW packages out there that provide an interface to HDF files and read and write native LabVIEW data types - they work well enough in my experience, although I haven't personally used HDF5 files in anything other than proof-of-concept code with LabVIEW.
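     A quick sketch of that directory/attribute/dataset structure using Python's h5py package (the file name and layout are invented, just to show the shape of the API):

        import h5py
        import numpy as np

        with h5py.File("scan.h5", "w") as f:
            grp = f.create_group("entry/detector")                  # virtual directory
            dset = grp.create_dataset("image", data=np.zeros((256, 256)))
            dset.attrs["exposure_s"] = 0.1                          # metadata attribute

        with h5py.File("scan.h5", "r") as f:
            img = f["entry/detector/image"][...]                    # read the whole array
            print(img.shape, f["entry/detector/image"].attrs["exposure_s"])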
  15. One of the things I wanted to be able to do was to read from both ends of the buffer - i.e. to have a way to get the most recent n items, for n less than or equal to the size of the buffer. So I tweaked the read-buffer XNode template to treat a negative offset as meaning 'read backwards from the current buffer position' (i.e. Python style). This, of course, breaks LabVIEW array-indexing semantics, but for this particular situation it makes some degree of intuitive sense to me. The attached is my hacked template. XNode Template.vi
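     The indexing convention described above, sketched as a toy Python class rather than the XNode itself (names are illustrative): a non-negative offset reads forward from the oldest element, a negative offset counts back from the most recently written one.

        import numpy as np

        class RingBuffer:
            """Toy circular buffer illustrating the two-ended read."""
            def __init__(self, size):
                self.data = np.zeros(size)
                self.pos = 0          # index of the next write
                self.count = 0        # number of valid elements (<= size)

            def write(self, value):
                self.data[self.pos] = value
                self.pos = (self.pos + 1) % len(self.data)
                self.count = min(self.count + 1, len(self.data))

            def read(self, offset, n):
                """offset >= 0: from the oldest element; offset < 0: back from the newest."""
                oldest = (self.pos - self.count) % len(self.data)
                start = ((oldest + offset) if offset >= 0 else (self.pos + offset)) % len(self.data)
                idx = (start + np.arange(n)) % len(self.data)
                return self.data[idx]

        buf = RingBuffer(8)
        for v in range(10):
            buf.write(v)
        print(buf.read(0, 3))    # oldest three:  [2. 3. 4.]
        print(buf.read(-3, 3))   # newest three:  [7. 8. 9.]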